I am using the binary. Just running it inside a container instead of a VM.
overlay fs?
Yes.
Since originally writing the post I have switched to a rootless `podman` container. Running it how I did before (inside a VM) would simply yield `user_id=1000,group_id=1000`, I think.
`rw,nosuid,nodev,relatime,user_id=0,group_id=0`
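(For anyone wanting to check their own setup: `findmnt` prints the filesystem type and options for a mount point. The path below is a placeholder, not the actual mount point of the client.)

```shell
# Print filesystem type and mount options for a given mount point.
# "/path/to/cloud-drive" is a placeholder, not the actual path.
findmnt -no FSTYPE,OPTIONS /path/to/cloud-drive
```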
Thank you!
Fair. I will try NFS if anything else fails. Thanks :)
I strongly disagree that this would not be beneficial. Could you expand?
I don’t understand what you mean by the content disappearing when you mount the `virtiofs` on the guest - isn’t the mount empty when bound, until the guest populates it?
Sorry, I made a mistake in the original post. I wanted to say "on the host" instead of "on the guest". My bad.
Yes, you are correct, the folder is empty until I log in inside the cloud application on the guest.
does it require local storage or support remote?
What do you mean? The cloud drive is a network drive basically. It only downloads files on demand.
if guest os is linux, nfs will probably do
This is what others have suggested and what I will probably do if the method below fails.
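(A sketch of what I understand the NFS route to be: export the mounted drive from the guest and mount it on the host. The path, address and `fsid` below are placeholders, and a FUSE mount typically needs an explicit `fsid=` to be exportable at all - none of this is tested.)

```
# /etc/exports on the guest - path, host address and fsid are placeholders
/mnt/cloud  192.168.122.1(rw,no_subtree_check,fsid=1)
```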
podman/docker seems to be the proper tool for you here
Yesterday I actually tried to spin up a `podman` container hoping it would work, but I encountered the following problem when trying to propagate mounts:
https://lemmy.ml/post/22215540
Could you please assist me there if you have further ideas? Thank you :)
Keep in mind that a screwup could be interpreted by the sync client as mass-deletes
I am VERY aware of this *sweating*
do you need them all at the same time?
I need to access all files conveniently and transparently depending on what I need at work in that particular moment.
are they mostly the same size and type?
Hard no.
What would be the performance implications? Isn’t `virtiofs` theoretically faster?
Every other solution looks more elegant on paper but has lots of pitfalls
A very sane and fair comment.
Why not NFS? Regardless, wouldn’t it be slower anyway compared to `virtiofs`?
`strace` can be very verbose and requires a lot of knowledge that I doubt I can share through comments back and forth.
No worries. Thanks a lot nonetheless.
is creating an intermediary like others have commented on in this post an option?
What do you mean by intermediary? Do you mean syncing the files with the VM and then sharing the synced copy with the host? That wouldn’t work, since my drive is smaller than the cloud drive and I need all the files on-demand.
It does not, hence my question.
do you have to provide a username/password or token when you try to access the drive now?
I do, but it’s through the proprietary GUI of the binary, which has no CLI or API I can use.
I just checked and it is mounted as a `fuse` drive.
do you know how to use strace?
A very confident NO :)
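(For anyone curious, a minimal starting point looks something like this; the binary name is a placeholder.)

```shell
# Trace file- and network-related syscalls, following child processes,
# and write the output to a log file instead of the terminal.
# "./cloud-client" is a placeholder for the actual proprietary binary.
strace -f -e trace=%file,%network -o strace.log ./cloud-client
```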
The cloud binary is proprietary and not supported by `rclone`, unless I find out how the binary works, but I doubt it uses something standardized like WebDAV underneath.
I can try, but I might end up in the same situation as with `virtiofs`: the cloud drive will get unmounted and I will end up with an empty folder when I try to access it from the host.
The cloud drive is mounted on the guest, yes, but once I mount it with `virtiofs` in order to share it with the host, it gets unmounted and I end up with an empty folder. `bind` doesn’t work either.
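(One thing worth noting for anyone hitting the same wall: a plain `mount --bind` does not carry submounts along, so a FUSE mount nested under the bound tree shows up as an empty directory, while `--rbind` recurses into submounts. The paths below are placeholders.)

```shell
# A plain bind mount only binds the top filesystem's subtree; submounts
# (such as a nested FUSE mount) appear as empty directories.
mount --bind  /srv/guest-share /mnt/host-view
# --rbind recursively binds submounts too, so the FUSE content follows.
mount --rbind /srv/guest-share /mnt/host-view
```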
Because the executable is proprietary (and a bit legacy, I would say), full of telemetry and undocumented, and the cloud service has no CLI, WebDAV or `rclone` support. I do not want to run something like that on my personal computer, and I do not know how to use `bwrap` properly and don’t want to risk it. I have since switched over to a `podman` container but I encounter the same problem: the folder is empty on the host (see my post here: https://lemmy.ml/post/22215540).