Sorry but I can’t think of another word for it right now. This is mostly just venting but also if anyone has a better way to do it I wouldn’t hate to hear it.
I’m trying to set up a home server for all of our family photos. We’re on our way to de-googling, and part of the impetus for the change is that our Google Drive is almost full. We have a few hundred gigs of photos between us. The problem with trying to download your data from Google is that the only reasonable way it lets you do so is through Google Takeout. First you have to order it. Then you have to wait anywhere from a few hours to a day or two for Google to “prepare” the download. Then you have one week before the takeout “expires.” That’s one week to the minute from the time of the initial request.
I don’t have some kind of fancy California internet, I just have normal home internet, and there is just no way to download a 50 gig (or even 2 gig) file in one go - there are always interruptions that require restarting the download. But if you try to download the files too many times, Google gives you another error and you have to start over and request a new takeout. Google doesn’t let you download the entire archive at once either; you have to select each file part individually.
I can’t tell you how many weeks I’ve spent trying to download all of the files before they expire or Google throws another error.
There’s no financial incentive for them to make it easy to leave Google. Takeout only exists to comply with regulations (e.g. the Digital Markets Act), and as usual, they’re doing the bare minimum to not get sued.
Or why is Google Takeout as good as it is? It’s got no business being as useful as it is in a profit-maximizing corpo. 😂 It can be way worse while still technically compliant. Or expect Takeout to get worse over time as Google looks into undermaximized profit streams.
Probably because the individual engineers working on Takeout care about doing a good job, even though the higher-ups would prefer something half-assed. I work for a major tech company and I’ve been in that same situation before, e.g. when I was working on GDPR compliance. I read the GDPR and tried hard to comply with the spirit of the law, but it was abundantly clear everyone above me hadn’t read it and only cared about doing the bare minimum.
Most likely. Plus Takeout appeared way back before Google was showing any profit-maximization signs, when it didn’t yet hold the monopoly position it holds today.
Honestly I thought you were going to bitch about them separating your metadata from the photos and you then having to remerge them with a special tool to get them to work with any other program.
omg they WHAT
I’m not really looking forward to that step either
Lmao I am both amused and horrified that I had somehow never come across this datapoint before
Immich has a great guide for moving a takeout from Google into Immich.
Links or it didn’t happen
Thank you! The goal is to set up immich. It’s my first real foray into self hosting, and it seems close enough to feature parity with Google that the family will go for it. I ran a test with my local photos and it works great, so this is the next step.
https://github.com/simone-viozzi/my-server
this is my setup, ofc it’s still (and will always be) a work in progress
I know it’s not ideal, but if you can afford it, you could rent a VPS from a cloud provider for a week or two, do the Google Takeout download on that, and then use rsync or similar to copy the files to your own server.
I don’t know how to do any of that but I know it will help to know anyway. I’ll look into it. Thanks
Be completely dumb and install a desktop OS like Ubuntu Desktop. Then remote into it, and use the browser just as normal to download the stuff onto it. We’ll help you with moving the data off it to your local machine afterwards. Critically, the machine has to have enough storage to hold your entire download.
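For the “moving the data off” step afterwards: rsync or scp over SSH is the usual route, but if you’d rather script it, here’s a minimal Python sketch using the paramiko library. The hostname, username, and paths are placeholders, and it assumes you already have SSH key auth set up to the VPS:

```python
# Pull the finished Takeout files off the VPS over SFTP.
# Placeholders: vps.example.com, "me", "takeout", "/mnt/photos".
import paramiko  # pip install paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("vps.example.com", username="me")  # assumes key-based auth

sftp = client.open_sftp()
for name in sftp.listdir("takeout"):  # remote directory on the VPS
    print(f"fetching {name} ...")
    sftp.get(f"takeout/{name}", f"/mnt/photos/{name}")  # local destination
sftp.close()
client.close()
```

(rsync gives you resumable transfers for free, so it’s still the better tool over a flaky connection; this is just the scripted variant.)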
Instead of having to set up an operating system with a cloud provider, maybe another cloud backup service would work. Something like Backblaze can receive your Google files. Then you can download from Backblaze at your leisure.
https://help.goodsync.com/hc/en-us/articles/115003419711-Backblaze-B2
Or use the filters by date to limit the amount of takeout data that’s created? Then repeat with different filters for the next chunk.
I was gonna suggest the same.
Use this. It’s finicky but works for me. You have to start the download on one device, then pause it, copy the command to your file server, then run it. It’s slow and you can only do one at a time, but it’s enough to leave it idling.
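For anyone wondering what that pause-and-copy trick amounts to under the hood: the resume works because the download server honors HTTP Range requests. A minimal Python sketch of the same idea, with a placeholder URL (real Takeout links are tied to your session and expire, so treat this as an illustration, not a drop-in):

```python
# Resume a partial download by asking the server for the missing bytes.
# Placeholder URL/filename; real Takeout links also need your auth cookies.
import os
import requests  # pip install requests

url = "https://example.com/takeout-001.zip"
dest = "takeout-001.zip"

have = os.path.getsize(dest) if os.path.exists(dest) else 0
headers = {"Range": f"bytes={have}-"} if have else {}

with requests.get(url, headers=headers, stream=True, timeout=60) as r:
    r.raise_for_status()
    # 206 means the server honored the Range header; append to what we have.
    mode = "ab" if r.status_code == 206 else "wb"
    with open(dest, mode) as f:
        for chunk in r.iter_content(chunk_size=1 << 20):
            f.write(chunk)
```

Run it again after every interruption and it picks up where it left off, which is essentially what the download managers suggested in this thread automate.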
Google takeout is there so they are technically compliant with rules that say you must be able to download your personal data, but they make it so inconvenient to use that practically it’s almost impossible to download it. Google photos isn’t a backup service so much as a way for Google to hold your photos hostage until you start paying for higher amounts of storage. And by the time you need that storage, Google takeout download has become impractical.
It doesn’t have an option to split it?
When I did my Google takeout to delete all my pics from Google Photos, there was an option to split it into something like “one zip every 2 GB”.
The first time, I tried it in the two-gigabyte blocks. The problem with that is I have to download them one or two at a time. That’s not very easy to do over the course of a week on a normal internet connection. Keep in mind, I also have a job.
I got about 50 out of 60 files before the one week timer reset and I had to start all over.
Apparently you can save it to Google Drive, then install the Google Drive program and make that folder available offline so it downloads to the computer:

- When you set up the Google Takeout export, choose “Save in a Google Drive folder”.
- Install the Google Drive PC client (Drive for desktop).
- It will create a new drive (i.e. G:) in your explorer. Right-click on the takeout folder and select “Make available offline”. All files in that folder will be downloaded by Google Drive for desktop in the background, and you will be able to copy them to another location, as they will be local files.
You could look into using a download manager. No reason for you to manually start each download in sequence if there’s a way to get your computer to automatically start the next as soon as one finishes.
Any recommendations? Windows or Linux?
jDownloader
It is the most patient downloader I know.
Definitely recommend Motrix.
If the Google download link supports it, it should be fairly resistant to interruptions. If it doesn’t, this might not help much, but you should still use this instead of just a browser.
I haven’t tried to download a Google takeout, so you might need to get clever with how you add the download link to it.
If you just can’t get it to work, you can try getting the browser extension to automatically send all downloads to Motrix. There is some setup required, though:
https://github.com/gautamkrishnar/motrix-webextension
Good luck!
I used uGet on Windows, and it was fairly smooth. Not Google, but an equally annoying large download. I believe it’s on Linux as well.
Google Takeout is the best GDPR-compliant platform of all the big tech giants. Amazon, for example, makes you wait until the very last day they legally can.
Also, they do minimal processing, like with the metadata (as others commented), since that is probably how they store it internally, and that’s what they’re required to deliver. The simple fact that you can select what you want to request, instead of having to download everything about you, makes it good in my eyes.
I actually see good-faith compliance with the GDPR in the platform.
It could absolutely be worse. The main problem is the lack of flexibility: if I could ask for an extension after downloading 80% of the files over a week, for example, that would be helpful. I’m also beginning to suspect that they cap the download speed, because I’m seeing similar speeds on my home and work networks…
I think this is a bit unfair. Most Google Takeout requests are fulfilled in seconds or minutes. Obviously collating 100GB of photos into a zip takes time.
And it’s not Google’s fault you have internet issues: even a fairly modest 20Mbps internet connection can do 50GB in 6h. If you have outages, that’s on your ISP, not Google. As others have said, have it download to a VPS or Dropbox etc., then sync it from there. Or call your ISP and tell them to sort your line out; I’ve had 100% uptime on my VDSL copper line for over 2 years.
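The back-of-envelope math checks out, for what it’s worth:

```python
# 50 GB over a 20 Mbps line, ignoring protocol overhead.
size_megabits = 50 * 8 * 1000   # 50 GB ~= 400,000 megabits
seconds = size_megabits / 20    # at 20 Mbps: ~20,000 s
print(seconds / 3600)           # ~5.6 hours
```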
I was able to use Google Takeout and my relatively modest 50Mbps connection to successfully Takeout 200GB of data in a couple of days.
What download manager did you use? I’ve tried with whatever’s built into Firefox on two different networks, with similar results. The downloads freeze every so often and I have to restart them (they pick up where they left off). Sometimes one just won’t reconnect, which I’m guessing is a timeout issue on Google’s end, although I really have no idea.
I don’t ever have to manage downloads of this size, so sorry if it’s an obvious question
Not OP, but I use this download manager. It has been good.
Definitely misread that as Download The Mall and was quite amused by the name until I checked the link to see more lol
Not sure if somebody mentioned it, but you can export to OneDrive. You can get a 1TB account for a free trial or for a single month and export everything there as simple files, no large zips. Then download to the computer with the app, and cancel OneDrive afterwards.
Pretend to be in California/the EU and then ask for full removal of all your data from both Microsoft and Google.
This route may be the answer. I haven’t had success so far in setting up a download manager that offers any real improvement over the browser. I wanted to avoid my photos being on two corporate services, but as you say, in theory everything is deletable.
Because Google doesn’t want you to export your photos. They want you to depend on them 100%.
deleted by creator
Yeah, of course it varies place to place, but I think for the majority of at least somewhat developed countries, and for urban areas in less developed countries, 50Mbps is a reasonable figure for “normal home internet”. Even at 25Mbps you’re looking at 4½ hours for 50GB, which is very doable if you leave it going while you’re at work or just in the background over the course of an evening.
Edit: I was curious and looked it up. Global average download is around 50-60Mbps and upload is 10-12Mbps.
they must have dialup or live in the middle of nowhere
deleted by creator
The part that is Google’s fault is that they limit the number of download attempts and the files expire after 1 week. That should be clear from the post.
Well then, read it as “shitty rural internet.” Use context clues.
deleted by creator
Can you do one album at a time? Select the albums you want in this batch, then download that file. Then do the next few albums. That way you have direct control over the data you’re getting in each batch, and you’ll have a full week for that batch instead of having to start again because the whole thing didn’t finish in a week.
That’s a thought. I could organize the photos first and then do multiple takeouts. Thanks
I have fancy California Internet and the downloads are surprisingly slow and kept slowing down and turning off. It was such a pain to get my data out of takeout.
Just gone through this whole process myself. My god does it suck. Another thing you’ll want to be aware of with Takeout and Google Photos is that the photo metadata isn’t attached as EXIF like it would be with a normal service; instead, it’s given as an accompanying JSON file for each image. I’m using Memories for Nextcloud, and it has a tool that can restore the EXIF metadata using those files, but it’s not exact, and now I have about 1.5k images tagged as being from this year when they’re really from 2018 or before. I’m looking at writing my own tool to restore some of this metadata, but it’s going to be a right pain in the ass.
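If you do end up writing your own, the core of it is surprisingly small. A minimal sketch, assuming the usual Takeout layout (IMG_xxxx.jpg next to IMG_xxxx.jpg.json), the photoTakenTime.timestamp field those sidecars carry, and the piexif library; it only fixes the taken-date on JPEGs, nothing else:

```python
# Restore EXIF DateTimeOriginal from Google Takeout JSON sidecars.
# Assumes each photo's sidecar sits next to it as "<name>.json".
import json
import pathlib
from datetime import datetime, timezone

import piexif  # pip install piexif

for jpg in pathlib.Path("takeout").rglob("*.jpg"):
    sidecar = jpg.with_name(jpg.name + ".json")
    if not sidecar.exists():
        continue
    meta = json.loads(sidecar.read_text())
    ts = int(meta["photoTakenTime"]["timestamp"])  # unix epoch seconds
    taken = datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y:%m:%d %H:%M:%S")

    exif = piexif.load(str(jpg))
    exif["Exif"][piexif.ExifIFD.DateTimeOriginal] = taken.encode()
    piexif.insert(piexif.dump(exif), str(jpg))  # writes back in place
```

Albums, descriptions, GPS, and non-JPEG formats are where the real pain starts, which is why the existing tools are worth a look first.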
Wow thanks for that. I was looking into https://github.com/TheLastGimbus/GooglePhotosTakeoutHelper but I haven’t gotten to that step yet
Ooh, might look into that instead, actually. I always love a reason to write myself a little tool, but dealing with Google’s bull makes it much less appealing to me when existing tools can do it for me.
Tools for this already exist that work on takeout archives; it’s how I migrated to Immich.
They also don’t always keep the metadata in the same archive (zip or tar) as the pictures it belongs with, and that can throw off imports with tools that process Google Takeout archives directly. It’s a pretty nasty solution, for real.
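One way to cope with that split is to extract all the takeout-*.zip archives under a single directory, then index every sidecar by the filename it describes before matching, instead of assuming each JSON sits next to its photo. A sketch of just the matching step (the ./takeout root is a placeholder, and duplicate filenames across albums would need smarter handling):

```python
# Match photos to JSON sidecars even when they landed in different archives.
import pathlib

root = pathlib.Path("takeout")

# "IMG_1234.jpg.json" describes "IMG_1234.jpg", whichever archive it came from.
# NOTE: identical filenames in different albums will collide in this dict.
sidecars = {js.name[: -len(".json")]: js for js in root.rglob("*.json")}

for photo in root.rglob("*.jpg"):
    sidecar = sidecars.get(photo.name)
    if sidecar is None:
        print(f"no metadata found for {photo}")
        continue
    # ...merge the JSON fields into the photo's EXIF here...
```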
I moved about 140GB to ente.io before they had their newer takeout process, but some destinations can enable third-party apps (like rclone) to do cloud-to-cloud. Not sure which works best, since I couldn’t go that route myself.
Ah great, that could be why a bunch of my photos didn’t get metadata. I’ll look into that, thanks for the tip.
It’s bad because they don’t want you to use it, but they made it exist so that they don’t get sued by the European Union.
Try this, then do them one at a time. You have to start the download in your browser first, but you can click “pause” and leave the browser open as it downloads to your server.