Downloading a Large Folder From Google-Drive Using Python

I'm trying to download a large folder containing 50,000 images from my GDrive to a local server using Python. The following code hits a limitation error. Any alternative solutions?

import gdown
url = 'https://drive.google.com/drive/folders/135hTTURfjn43fo4f?usp=sharing'  # I'm showing a fake token
gdown.download_folder(url)

Failed to retrieve folder contents:

The gdrive folder with url: https://drive.google.com/drive/folders/135hTTURfjn43fo4f?usp=sharing has at least 50 files, gdrive can't download more than this limit, if you are ok with this, please run again with the --remaining-ok flag.


Answer:


As kite mentioned in the comments, use it with the remaining_ok flag.

gdown.download_folder(url, remaining_ok=True)

This is not mentioned at https://pypi.org/project/gdown/, hence any possible confusion.

There is no reference to the warning or to remaining_ok anywhere except in this github code.
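
For reference, a minimal sketch of a fuller call, assuming you also want to choose the local output directory (the directory name 'images' is just an example):

import gdown

url = 'https://drive.google.com/drive/folders/135hTTURfjn43fo4f?usp=sharing'
gdown.download_folder(url, output='images', quiet=False, remaining_ok=True)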

### Edit:

It seems gdown is strictly limited to 50 files, and no way around that limit was found.

If something other than gdown is an option, see the code below.

### Script:

import io
import os
import os.path
from googleapiclient.discovery import build
from googleapiclient.http import MediaIoBaseDownload
from google.oauth2 import service_account

credential_json = {
    ### Create a service account and use its JSON content here ###
    ### https://cloud.google.com/docs/authentication/getting-started#creating_a_service_account
    ### credentials.json looks like this:
    "type": "service_account",
    "project_id": "*********",
    "private_key_id": "*********",
    "private_key": "-----BEGIN PRIVATE KEY-----n*********n-----END PRIVATE KEY-----n",
    "client_email": "service-account@*********.iam.gserviceaccount.com",
    "client_id": "*********",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://oauth2.googleapis.com/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/service-account%40*********.iam.gserviceaccount.com"
}

credentials = service_account.Credentials.from_service_account_info(credential_json)
drive_service = build('drive', 'v3', credentials=credentials)

folderId = '### Google Drive Folder ID ###'
outputFolder = 'output'

# Create folder if not existing
if not os.path.isdir(outputFolder):
    os.mkdir(outputFolder)

items = []
pageToken = ""
while pageToken is not None:
    response = drive_service.files().list(q="'" + folderId + "' in parents", pageSize=1000, pageToken=pageToken,
                                          fields="nextPageToken, files(id, name)").execute()
    items.extend(response.get('files', []))
    pageToken = response.get('nextPageToken')

for file in items:
    file_id = file['id']
    file_name = file['name']
    request = drive_service.files().get_media(fileId=file_id)
    ### Saves all files under outputFolder
    fh = io.FileIO(outputFolder + '/' + file_name, 'wb')
    downloader = MediaIoBaseDownload(fh, request)
    done = False
    # Download the file chunk by chunk; report only once the whole file is done
    while done is False:
        status, done = downloader.next_chunk()
    print(f'{file_name} downloaded completely.')
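
If you would rather not paste the key contents into the script, the same service-account key file can be loaded from disk instead; a minimal sketch, assuming the key is saved as credentials.json next to the script and that the read-only Drive scope is enough:

from googleapiclient.discovery import build
from google.oauth2 import service_account

# Path to the downloaded service-account key file (assumed filename)
KEY_FILE = 'credentials.json'
# Read-only access is enough for downloading files
SCOPES = ['https://www.googleapis.com/auth/drive.readonly']

credentials = service_account.Credentials.from_service_account_file(KEY_FILE, scopes=SCOPES)
drive_service = build('drive', 'v3', credentials=credentials)

Remember that a service account only sees files that have been shared with its client_email address, so share the source folder with that address first.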

### Reference:

!pip uninstall --yes gdown # After running this line, restart Colab runtime.
!pip install gdown -U --no-cache-dir
import gdown

url = r'https://drive.google.com/drive/folders/1sWD6urkwyZo8ZyZBJoJw40eKK0jDNEni'
gdown.download_folder(url)
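
If the folder still has more than 50 files, this call from the referenced answer will most likely hit the same limit as above, so the remaining_ok flag still applies:

gdown.download_folder(url, remaining_ok=True)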