rclone(1) User Manual

Nick Craig-Wood

Aug 24, 2016

Rclone

Rclone is a command line program to sync files and directories to and from

Google Drive
Amazon S3
Openstack Swift / Rackspace cloud files / Memset Memstore
Dropbox
Google Cloud Storage
Amazon Drive
Microsoft OneDrive
Hubic
Backblaze B2
Yandex Disk
The local filesystem

Features

MD5/SHA1 hashes checked at all times for file integrity
Timestamps preserved on files
Partial syncs supported on a whole file basis
Copy mode to just copy new/changed files
Sync (one way) mode to make a directory identical
Check mode to check for file hash equality
Can sync to and from network, eg two different cloud accounts
Optional encryption (Crypt)
Optional FUSE mount (rclone mount)

Links

Home page
Github project page for source and bug tracker
Google+ page
Downloads

Install

Rclone is a Go program and comes as a single binary file.

Download the relevant binary.

Alternatively, if you have Go 1.5+ installed, use

go get github.com/ncw/rclone

and this will build the binary in $GOPATH/bin. If you have built rclone before then you will want to update its dependencies first with

go get -u -v github.com/ncw/rclone/...

See the Usage section of the docs for how to use rclone, or run rclone -h.

Linux binary download and install example

unzip rclone-v1.17-linux-amd64.zip
cd rclone-v1.17-linux-amd64
# copy binary file
sudo cp rclone /usr/sbin/
sudo chown root:root /usr/sbin/rclone
sudo chmod 755 /usr/sbin/rclone
# install manpage
sudo mkdir -p /usr/local/share/man/man1
sudo cp rclone.1 /usr/local/share/man/man1/
sudo mandb
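
Once installed, a quick sanity check that the binary is on your PATH (both commands are documented later in this manual):

rclone version   # prints the version number
rclone -h        # lists subcommands and flags
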
Installation with Ansible

This can be done with Stefan Weichinger's ansible role.

Instructions

git clone https://github.com/stefangweichinger/ansible-rclone.git into your local roles-directory
add the role to the hosts you want rclone installed to:

- hosts: rclone-hosts
  roles:
    - rclone

Configure

First you'll need to configure rclone. As the object storage systems have quite complicated authentication these are kept in a config file .rclone.conf in your home directory by default. (You can use the --config option to choose a different config file.)

The easiest way to make the config is to run rclone with the config option:

rclone config

See the following for detailed instructions for

Google drive
Amazon S3
Swift / Rackspace Cloudfiles / Memset Memstore
Dropbox
Google Cloud Storage
Local filesystem
Amazon Drive
Backblaze B2
Hubic
Microsoft OneDrive
Yandex Disk
Crypt - to encrypt other remotes

Usage

Rclone syncs a directory tree from one storage system to another.

Its syntax is like this

Syntax: [options] subcommand <parameters> <parameters...>

Source and destination paths are specified by the name you gave the storage system in the config file then the sub path, eg "drive:myfolder" to look at "myfolder" in Google drive.

You can define as many storage paths as you like in the config file.
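
For reference, the config file rclone config writes is a plain INI-style file with one section per remote, along the lines of the sketch below (the remote name and values are illustrative - run rclone config rather than writing it by hand):

[remote]
type = drive
client_id =
client_secret =
token = {"AccessToken":"xxxx","RefreshToken":"xxxx","Expiry":"..."}
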
Subcommands

rclone uses a system of subcommands. For example

rclone ls remote:path # lists a remote
rclone copy /local/path remote:path # copies /local/path to the remote
rclone sync /local/path remote:path # syncs /local/path to the remote

rclone config

Enter an interactive configuration session.

Synopsis

Enter an interactive configuration session.

rclone config

rclone copy

Copy files from source to dest, skipping already copied

Synopsis

Copy the source to the destination. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Doesn't delete files from the destination.

Note that it is always the contents of the directory that is synced, not the directory itself, so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents.

If dest:path doesn't exist, it is created and the source:path contents go there.

For example

rclone copy source:sourcepath dest:destpath

Let's say there are two files in sourcepath

sourcepath/one.txt
sourcepath/two.txt

This copies them to

destpath/one.txt
destpath/two.txt

Not to

destpath/sourcepath/one.txt
destpath/sourcepath/two.txt

If you are familiar with rsync, rclone always works as if you had written a trailing / - meaning "copy the contents of this directory". This applies to all commands and whether you are talking about the source or destination.

See the --no-traverse option for controlling whether rclone lists the destination directory or not.

rclone copy source:path dest:path

rclone sync

Make source and dest identical, modifying destination only.

Synopsis

Sync the source to the destination, changing the destination only. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary.

Important: Since this can cause data loss, test first with the --dry-run flag to see exactly what would be copied and deleted.

Note that files in the destination won't be deleted if there were any errors at any point.

It is always the contents of the directory that is synced, not the directory itself, so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See extended explanation in the copy command above if unsure.

If dest:path doesn't exist, it is created and the source:path contents go there.

rclone sync source:path dest:path
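
For example, a cautious workflow (paths here are illustrative) is to preview the sync first, then run it for real:

rclone --dry-run sync /home/local/directory remote:backup   # show what would be copied and deleted
rclone sync /home/local/directory remote:backup             # then do it for real
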
rclone move

Move files from source to dest.

Synopsis

Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap.

If no filters are in use and if possible this will server side move source:path into dest:path. After this source:path will no longer exist.

Otherwise for each file in source:path selected by the filters (if any) this will move it into dest:path. If possible a server side move will be used, otherwise it will copy it (server side if possible) into dest:path then delete the original (if no errors on copy) in source:path.

Important: Since this can cause data loss, test first with the --dry-run flag.

rclone move source:path dest:path

rclone delete

Remove the contents of path.

Synopsis

Remove the contents of path. Unlike purge it obeys include/exclude filters so can be used to selectively delete files.

Eg delete all files bigger than 100MBytes.

Check what would be deleted first (use either)

rclone --min-size 100M lsl remote:path
rclone --dry-run --min-size 100M delete remote:path

Then delete

rclone --min-size 100M delete remote:path

That reads "delete everything with a minimum size of 100 MB", hence delete all files bigger than 100MBytes.

rclone delete remote:path

rclone purge

Remove the path and all of its contents.

Synopsis

Remove the path and all of its contents. Note that this does not obey include/exclude filters - everything will be removed. Use delete if you want to selectively delete files.

rclone purge remote:path

rclone mkdir

Make the path if it doesn't already exist.

Synopsis

Make the path if it doesn't already exist.

rclone mkdir remote:path

rclone rmdir

Remove the path if empty.

Synopsis

Remove the path. Note that you can't remove a path with objects in it, use purge for that.

rclone rmdir remote:path

rclone check

Checks the files in the source and destination match.

Synopsis

Checks the files in the source and destination match. It compares sizes and MD5SUMs and prints a report of files which don't match. It doesn't alter the source or destination.

--size-only may be used to only compare the sizes, not the MD5SUMs.

rclone check source:path dest:path

rclone ls

List all the objects in the path with size and path.

Synopsis

List all the objects in the path with size and path.

rclone ls remote:path

rclone lsd

List all directories/containers/buckets in the path.

Synopsis

List all directories/containers/buckets in the path.

rclone lsd remote:path

rclone lsl

List all the objects in the path with modification time, size and path.

Synopsis

List all the objects in the path with modification time, size and path.

rclone lsl remote:path

rclone md5sum

Produces an md5sum file for all the objects in the path.

Synopsis

Produces an md5sum file for all the objects in the path. This is in the same format as the standard md5sum tool produces.

rclone md5sum remote:path

rclone sha1sum

Produces a sha1sum file for all the objects in the path.

Synopsis

Produces a sha1sum file for all the objects in the path. This is in the same format as the standard sha1sum tool produces.

rclone sha1sum remote:path

rclone size

Prints the total size and number of objects in remote:path.

Synopsis

Prints the total size and number of objects in remote:path.

rclone size remote:path

rclone version

Show the version number.

Synopsis

Show the version number.

rclone version

rclone cleanup

Clean up the remote if possible.

Synopsis

Clean up the remote if possible. Empty the trash or delete old file versions. Not supported by all remotes.

rclone cleanup remote:path

rclone dedupe

Interactively find duplicate files and delete/rename them.

Synopsis

By default dedupe interactively finds duplicate files and offers to delete all but one or rename them to be different. Only useful with Google Drive which can have duplicate file names.

The dedupe command will delete all but one of any identical (same md5sum) files it finds without confirmation. This means that for most duplicated files the dedupe command will not be interactive. You can use --dry-run to see what would happen without doing anything.

Here is an example run.

Before - with duplicates

$ rclone lsl drive:dupes
 6048320 2016-03-05 16:23:16.798000000 one.txt
 6048320 2016-03-05 16:23:11.775000000 one.txt
  564374 2016-03-05 16:23:06.731000000 one.txt
 6048320 2016-03-05 16:18:26.092000000 one.txt
 6048320 2016-03-05 16:22:46.185000000 two.txt
 1744073 2016-03-05 16:22:38.104000000 two.txt
  564374 2016-03-05 16:22:52.118000000 two.txt

Now the dedupe session

$ rclone dedupe drive:dupes
2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
one.txt: Found 4 duplicates - deleting identical copies
one.txt: Deleting 2/3 identical duplicates (md5sum "1eedaa9fe86fd4b8632e2ac549403b36")
one.txt: 2 duplicates remain
  1:      6048320 bytes, 2016-03-05 16:23:16.798000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
  2:       564374 bytes, 2016-03-05 16:23:06.731000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
s/k/r> k
Enter the number of the file to keep> 1
one.txt: Deleted 1 extra copies
two.txt: Found 3 duplicates - deleting identical copies
two.txt: 3 duplicates remain
  1:       564374 bytes, 2016-03-05 16:22:52.118000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
  2:      6048320 bytes, 2016-03-05 16:22:46.185000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
  3:      1744073 bytes, 2016-03-05 16:22:38.104000000, md5sum 851957f7fb6f0bc4ce76be966d336802
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
s/k/r> r
two-1.txt: renamed from: two.txt
two-2.txt: renamed from: two.txt
two-3.txt: renamed from: two.txt

The result being

$ rclone lsl drive:dupes
 6048320 2016-03-05 16:23:16.798000000 one.txt
  564374 2016-03-05 16:22:52.118000000 two-1.txt
 6048320 2016-03-05 16:22:46.185000000 two-2.txt
 1744073 2016-03-05 16:22:38.104000000 two-3.txt

Dedupe can be run non-interactively using the --dedupe-mode flag or by using an extra parameter with the same value

--dedupe-mode interactive - interactive as above.
--dedupe-mode skip - removes identical files then skips anything left.
--dedupe-mode first - removes identical files then keeps the first one.
--dedupe-mode newest - removes identical files then keeps the newest one.
--dedupe-mode oldest - removes identical files then keeps the oldest one.
--dedupe-mode rename - removes identical files then renames the rest to be different.

For example to rename all the identically named photos in your Google Photos directory, do

rclone dedupe --dedupe-mode rename "drive:Google Photos"

Or

rclone dedupe rename "drive:Google Photos"

rclone dedupe [mode] remote:path

Options

--dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|rename.

rclone authorize

Remote authorization.

Synopsis

Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config.

rclone authorize

rclone cat

Concatenates any files and sends them to stdout.

Synopsis

rclone cat sends any files to standard output.

You can use it like this to output a single file

rclone cat remote:path/to/file

Or like this to output any file in dir or subdirectories.

rclone cat remote:path/to/dir

Or like this to output any .txt files in dir or subdirectories.

rclone --include "*.txt" cat remote:path/to/dir

rclone cat remote:path

rclone genautocomplete

Output bash completion script for rclone.

Synopsis

Generates a bash shell autocompletion script for rclone.

This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg

sudo rclone genautocomplete

Log out and log in again to use the autocompletion scripts, or source them directly

. /etc/bash_completion

If you supply a command line argument the script will be written there.

rclone genautocomplete [output_file]

rclone gendocs

Output markdown docs for rclone to the directory supplied.

Synopsis

This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.

rclone gendocs output_directory

rclone mount

Mount the remote as a mountpoint. EXPERIMENTAL

Synopsis

rclone mount allows Linux, FreeBSD and OS X to mount any of Rclone's cloud storage systems as a file system with FUSE.

This is EXPERIMENTAL - use with care.

First set up your remote using rclone config. Check it works with rclone ls etc.

Start the mount like this

rclone mount remote:path/to/files /path/to/local/mount &

Stop the mount with

fusermount -u /path/to/local/mount

Or with OS X

umount /path/to/local/mount

Limitations

This can only read files sequentially, or write files sequentially. It can't read and write or seek in files.

rclonefs inherits rclone's directory handling. In rclone's world directories don't really exist. This means that empty directories will have a tendency to disappear once they fall out of the directory cache.

The bucket based FSes (eg swift, s3, Google Cloud Storage, b2) won't work from the root - you will need to specify a bucket, or a path within the bucket. So swift: won't work whereas swift:bucket will, as will swift:bucket/path.

Only supported on Linux, FreeBSD and OS X at the moment.

rclone mount vs rclone sync/copy

File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. This might happen in the future, but for the moment rclone mount won't do that, so it will be less reliable than the rclone command.

Bugs

All the remotes should work for read, but some may not for write
those which need to know the size in advance won't - eg B2
maybe should pass in size as -1 to mean work it out

TODO

Check hashes on upload/download
Preserve timestamps
Move directories

rclone mount remote:path /path/to/mountpoint

Options

--debug-fuse Debug the FUSE internals - needs -v.
--no-modtime Don't read the modification time (can speed things up).

Copying single files

rclone normally syncs or copies directories. However if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error Failed to create file system for "remote:file": is a file not a directory if it isn't.

For example, suppose you have a remote with a file in called test.jpg, then you could copy just that file like this

rclone copy remote:test.jpg /tmp/download

The file test.jpg will be placed inside /tmp/download.

This is equivalent to specifying

rclone copy --no-traverse --files-from /tmp/files remote: /tmp/download

Where /tmp/files contains the single line

test.jpg

It is recommended to use copy rather than sync when copying single files. They have pretty much the same effect but copy will use a lot less memory.

Quoting and the shell

When you are typing commands to your computer you are using something called the command line shell. This interprets various characters in an OS specific way.

Here are some gotchas which may help users unfamiliar with the shell rules

Linux / OSX

If your names have spaces or shell metacharacters (eg *, ?, $, ', " etc) then you must quote them. Use single quotes ' by default.

rclone copy 'Important files?' remote:backup

If you want to send a ' you will need to use ", eg

rclone copy "O'Reilly Reviews" remote:backup

The rules for quoting metacharacters are complicated and if you want the full details you'll have to consult the manual page for your shell.

Windows

If your names have spaces in them, you need to put them in ", eg

rclone copy "E:\folder name\folder name\folder name" remote:backup

If you are using the root directory on its own then don't quote it (see #464 for why), eg

rclone copy E:\ remote:backup

Server Side Copy

Drive, S3, Dropbox, Swift and Google Cloud Storage support server side copy.

This means if you want to copy one folder to another then rclone won't download all the files and re-upload them; it will instruct the server to copy them in place.

Eg

rclone copy s3:oldbucket s3:newbucket

Will copy the contents of oldbucket to newbucket without downloading and re-uploading.

Remotes which don't support server side copy (eg local) will download and re-upload in this case.

Server side copies are used with sync and copy and will be identified in the log when using the -v flag.

Server side copies will only be attempted if the remote names are the same.

This can be used when scripting to make aged backups efficiently, eg

rclone sync remote:current-backup remote:previous-backup
rclone sync /path/to/files remote:current-backup

Options

Rclone has a number of options to control its behaviour.

Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

Options which use SIZE use kByte by default. However a suffix of b for bytes, k for kBytes, M for MBytes and G for GBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.

--bwlimit=SIZE

Bandwidth limit in kBytes/s, or use suffix b|k|M|G. The default is 0 which means to not limit bandwidth.

For example to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M

This only limits the bandwidth of the data transfer, it doesn't limit the bandwidth of the directory listings etc.

--checkers=N

The number of checkers to run in parallel. Checkers do the equality checking of files during a sync. For some storage systems (eg s3, swift, dropbox) this can take a significant amount of time so they are run in parallel.

The default is to run 8 checkers in parallel.

-c, --checksum

Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check the file hash and size to determine if files are equal.

This is useful when the remote doesn't support setting modified time and a more accurate sync is desired than just checking the file size.

This is very useful when transferring between remotes which store the same hash type on the object, eg Drive and Swift. For details of which remotes support which hash type see the table in the overview section.

Eg rclone --checksum sync s3:/bucket swift:/bucket would run much quicker than without the --checksum flag.

When using this flag, rclone won't update mtimes of remote files if they are incorrect as it would normally.

--config=CONFIG_FILE

Specify the location of the rclone config file. Normally this is in your home directory as a file called .rclone.conf. If you run rclone -h and look at the help for the --config option you will see where the default location is for you. Use this flag to override the config location, eg rclone --config=".myconfig" .config.

--contimeout=TIME

Set the connection timeout. This should be in go time format which looks like 5s for 5 seconds, 10m for 10 minutes, or 3h30m.

The connection timeout is the amount of time rclone will wait for a connection to go through to a remote object storage system. It is 1m by default.

--dedupe-mode MODE

Mode to run dedupe command in. One of interactive, skip, first, newest, oldest, rename. The default is interactive. See the dedupe command for more information as to what these options mean.

-n, --dry-run

Do a trial run with no permanent changes. Use this to see what rclone would do without actually doing it. Useful when setting up the sync command which deletes files in the destination.

--ignore-existing

Using this option will make rclone unconditionally skip all files that exist on the destination, no matter the content of these files.

While this isn't a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted.

--ignore-size

Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the modification time. If --checksum is set then it only checks the checksum.

It will also cause rclone to skip verifying the sizes are the same after transfer.

This can be useful for transferring files to and from onedrive which occasionally misreports the size of image files (see #399 for more info).

-I, --ignore-times

Using this option will cause rclone to unconditionally upload all files regardless of the state of files on the destination.

Normally rclone would skip any files that have the same modification time and are the same size (or have the same checksum if using --checksum).

--log-file=FILE

Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v flag. See the Logging section for more info.

--low-level-retries NUMBER

This controls the number of low level retries rclone does.

A low level retry is used to retry a failing operation - typically one HTTP request. This might be uploading a chunk of a big file for example. You will see low level retries in the log with the -v flag.

This shouldn't need to be changed from the default in normal operations, however if you get a lot of low level retries you may wish to reduce the value so rclone moves on to a high level retry (see the --retries flag) quicker.

Disable low level retries with --low-level-retries 1.

--max-depth=N

This modifies the recursion depth for all the commands except purge.

So if you do rclone --max-depth 1 ls remote:path you will see only the files in the top level directory. Using --max-depth 2 means you will see all the files in the first two directory levels and so on.

For historical reasons the lsd command defaults to using a --max-depth of 1 - you can override this with the command line flag.

You can use this command to disable recursion (with --max-depth 1).

Note that if you use this with sync and --delete-excluded the files not recursed through are considered excluded and will be deleted on the destination. Test first with --dry-run if you are not sure what will happen.

--modify-window=TIME

When checking whether a file has been modified, this is the maximum allowed time difference that a file can have and still be considered equivalent.

The default is 1ns unless this is overridden by a remote. For example OS X only stores modification times to the nearest second so if you are reading and writing to an OS X filing system this will be 1s by default.

This command line flag allows you to override that computed default.

--no-gzip-encoding

Don't set Accept-Encoding: gzip. This means that rclone won't ask the server for compressed files automatically. Useful if you've set the server to return files with Content-Encoding: gzip but you uploaded compressed files.

There is no need to set this in normal operation, and doing so will decrease the network transfer efficiency of rclone.

--no-update-modtime

When using this flag, rclone won't update modification times of remote files if they are incorrect as it would normally.

This can be used if the remote is being synced with another tool also (eg the Google Drive client).

-q, --quiet

Normally rclone outputs stats and a completion message. If you set this flag it will make as little output as possible.

--retries int

Retry the entire sync if it fails this many times (default 3).

Some remotes can be unreliable and a few retries helps pick up the files which didn't get transferred because of errors.

Disable retries with --retries 1.

--size-only

Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the size.

This can be useful when transferring files from dropbox which have been modified by the desktop sync client which doesn't set checksums or modification times in the same way as rclone.

--stats=TIME

Rclone will print stats at regular intervals to show its progress.

This sets the interval.

The default is 1m. Use 0 to disable.
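
For example, to print progress every 10 seconds during a long transfer (interval and paths are illustrative):

rclone --stats 10s copy /large/directory remote:backup
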
--delete-(before,during,after)

This option allows you to specify when files on your destination are deleted when you sync folders.

Specifying the value --delete-before will delete all files present on the destination, but not on the source, before starting the transfer of any new or updated files. This uses extra memory as it has to store the source listing before proceeding.

Specifying --delete-during (default value) will delete files while checking and uploading files. This is usually the fastest option. Currently this works the same as --delete-after but it may change in the future.

Specifying --delete-after will delay deletion of files until all new/updated files have been successfully transferred.

--timeout=TIME

This sets the IO idle timeout. If a transfer has started but then becomes idle for this long it is considered broken and disconnected.

The default is 5m. Set to 0 to disable.

--transfers=N

The number of file transfers to run in parallel. It can sometimes be useful to set this to a smaller number if the remote is giving a lot of timeouts or bigger if you have lots of bandwidth and a fast remote.

The default is to run 4 file transfers in parallel.

-u, --update

This forces rclone to skip any files which exist on the destination and have a modified time that is newer than the source file.

If an existing destination file has a modification time equal (within the computed modify window precision) to the source file's, it will be updated if the sizes are different.

On remotes which don't support mod time directly the time checked will be the uploaded time. This means that if uploading to one of these remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file.

This can be useful when transferring to a remote which doesn't support mod times directly as it is more accurate than a --size-only check and faster than using --checksum.
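
As a quick sketch of typical usage (paths are illustrative), this copies new and changed files but leaves anything that already has a newer copy on the destination:

rclone -u copy /home/photos remote:photos
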
-v, --verbose

If you set this flag, rclone will become very verbose telling you about every file it considers and transfers.

Very useful for debugging.

-V, --version

Prints the version number.

Configuration Encryption

Your configuration file contains information for logging in to your cloud services. This means that you should keep your .rclone.conf file in a secure location.

If you are in an environment where that isn't possible, you can add a password to your configuration. This means that you will have to enter the password every time you start rclone.

To add a password to your rclone configuration, execute rclone config.

>rclone config
Current remotes:

e) Edit existing remote
n) New remote
d) Delete remote
s) Set configuration password
q) Quit config
e/n/d/s/q>

Go into s, Set configuration password:

e/n/d/s/q> s
Your configuration is not encrypted.
If you add a password, you will protect your login information to cloud services.
a) Add Password
q) Quit to main menu
a/q> a
Enter NEW configuration password:
password:
Confirm NEW password:
password:
Password set
Your configuration is encrypted.
c) Change Password
u) Unencrypt configuration
q) Quit to main menu
c/u/q>

Your configuration is now encrypted, and every time you start rclone you will now be asked for the password. In the same menu you can change the password or completely remove encryption from your configuration.

There is no way to recover the configuration if you lose your password.

rclone uses nacl secretbox which in turn uses XSalsa20 and Poly1305 to encrypt and authenticate your configuration with secret-key cryptography. The password is SHA-256 hashed, which produces the key for secretbox. The hashed password is not stored.

While this provides very good security, we do not recommend storing your encrypted rclone configuration in public if it contains sensitive information, except perhaps if you use a very strong password.

If it is safe in your environment, you can set the RCLONE_CONFIG_PASS environment variable to contain your password, in which case it will be used for decrypting the configuration.

If you are running rclone inside a script, you might want to disable password prompts. To do that, pass the parameter --ask-password=false to rclone. This will make rclone fail instead of asking for a password if RCLONE_CONFIG_PASS doesn't contain a valid password.
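
A minimal sketch of using these together in an unattended script (password and paths are placeholders):

export RCLONE_CONFIG_PASS="my-config-password"         # decrypts the config without prompting
rclone --ask-password=false sync /data remote:backup   # fails rather than prompts if the password is missing or wrong
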
Developer options

These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with the remote name, eg --drive-test-option - see the docs for the remote in question.

--cpuprofile=FILE

Write CPU profile to file. This can be analysed with go tool pprof.

--dump-bodies

Dump HTTP headers and bodies - may contain sensitive info. Can be very verbose. Useful for debugging only.

--dump-filters

Dump the filters to the output. Useful to see exactly what include and exclude options are filtering on.

--dump-headers

Dump HTTP headers - may contain sensitive info. Can be very verbose. Useful for debugging only.

--memprofile=FILE

Write memory profile to file. This can be analysed with go tool pprof.

--no-check-certificate=true/false

--no-check-certificate controls whether a client verifies the server's certificate chain and host name. If --no-check-certificate is true, TLS accepts any certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks.

This option defaults to false.

This should be used only for testing.

--no-traverse

The --no-traverse flag controls whether the destination file system is traversed when using the copy or move commands.

If you are only copying a small number of files and/or have a large number of files on the destination then --no-traverse will stop rclone listing the destination and save time.

However if you are copying a large number of files, especially if you are doing a copy where lots of the files haven't changed and won't need copying, then you shouldn't use --no-traverse.

It can also be used to reduce the memory usage of rclone when copying - rclone --no-traverse copy src dst won't load either the source or destination listings into memory so will use the minimum amount of memory.

Filtering

For the filtering options

--delete-excluded
--filter
--filter-from
--exclude
--exclude-from
--include
--include-from
--files-from
--min-size
--max-size
--min-age
--max-age
--dump-filters

See the filtering section.

Logging

rclone has 3 levels of logging, Error, Info and Debug.

By default rclone logs Error and Info to standard error and Debug to standard output. This means you can redirect standard output and standard error to different places.
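
For example, with -v set (file names are illustrative) the Debug stream can be captured separately from Error and Info:

rclone -v sync /src remote:dst > debug.log 2> info-and-errors.log
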
By default rclone will produce Error and Info level messages.

If you use the -q flag, rclone will only produce Error messages.

If you use the -v flag, rclone will produce Error, Info and Debug messages.

If you use the --log-file=FILE option, rclone will redirect Error, Info and Debug messages along with standard error to FILE.

Exit Code

If any errors occurred during the command, rclone will set a non zero exit code. This allows scripts to detect when rclone operations have failed.
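
A sketch of how a shell script might act on this (command and paths are examples):

if ! rclone sync /data remote:backup; then
  echo "rclone sync failed" >&2   # a non zero exit code lands here
fi
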
Configuring rclone on a remote / headless machine

Some of the configurations (those involving oauth2) require an Internet connected web browser.

If you are trying to set rclone up on a remote or headless box with no browser available on it (eg a NAS or a server in a datacenter) then you will need to use an alternative means of configuration. There are two ways of doing it, described below.

Configuring using rclone authorize

On the headless box

...
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> n
For this to work, you will need rclone available on a machine that has a web browser available.
Execute the following on your machine:
    rclone authorize "amazon cloud drive"
Then paste the result below:
result>

Then on your main desktop machine

rclone authorize "amazon cloud drive"
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
Paste the following into your remote machine --->
SECRET_TOKEN
<---End paste

Then back to the headless box, paste in the code

result> SECRET_TOKEN
--------------------
[acd12]
client_id =
client_secret =
token = SECRET_TOKEN
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>

Configuring by copying the config file

Rclone stores all of its config in a single configuration file. This can easily be copied to configure a remote rclone.

So first configure rclone on your desktop machine

rclone config

to set up the config file.

Find the config file by running rclone -h and looking for the help for the --config option

$ rclone -h
[snip]
--config="/home/user/.rclone.conf": Config file.
[snip]

Now transfer it to the remote box (scp, cut paste, ftp, sftp etc) and place it in the correct place (use rclone -h on the remote box to find out where).
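
For example, using scp (username and hostname are placeholders):

scp ~/.rclone.conf user@headless-box:~/.rclone.conf
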
Filtering, includes and excludes

Rclone has a sophisticated set of include and exclude rules. Some of these are based on patterns and some on other things like file size.

The filters are applied for the copy, sync, move, ls, lsl, md5sum, sha1sum, size, delete and check operations. Note that purge does not obey the filters.

Each path as it passes through rclone is matched against the include and exclude rules like --include, --exclude, --include-from, --exclude-from, --filter, or --filter-from. The simplest way to try them out is using the ls command, or --dry-run together with -v.

Important Due to limitations of the command line parser you can only use any of these options once - if you duplicate them then rclone will use the last one only.

Patterns

The patterns used to match files for inclusion or exclusion are based on "file globs" as used by the unix shell.

If the pattern starts with a / then it only matches at the top level of the directory tree, relative to the root of the remote. If it doesn't start with / then it is matched starting at the end of the path, but it will only match a complete path element:

file.jpg  - matches "file.jpg"
          - matches "directory/file.jpg"
          - doesn't match "afile.jpg"
          - doesn't match "directory/afile.jpg"
/file.jpg - matches "file.jpg" in the root directory of the remote
          - doesn't match "afile.jpg"
          - doesn't match "directory/file.jpg"

Important Note that you must use / in patterns and not \ even if running on Windows.

A * matches anything but not a /.

*.jpg - matches "file.jpg"
      - matches "directory/file.jpg"
      - doesn't match "file.jpg/something"

Use ** to match anything, including slashes (/).

dir/** - matches "dir/file.jpg"
       - matches "dir/dir1/dir2/file.jpg"
       - doesn't match "directory/file.jpg"
       - doesn't match "adir/file.jpg"

A ? matches any character except a slash /.

l?ss - matches "less"
     - matches "lass"
     - doesn't match "floss"

A [ and ] together make a character class, such as [a-z] or [aeiou] or [[:alpha:]]. See the go regexp docs for more info on these.

h[ae]llo - matches "hello"
         - matches "hallo"
         - doesn't match "hullo"

A { and } define a choice between elements. It should contain a comma-separated list of patterns, any of which might match. These patterns can contain wildcards.

{one,two}_potato - matches "one_potato"
                 - matches "two_potato"
                 - doesn't match "three_potato"
                 - doesn't match "_potato"

Special characters can be escaped with a \ before them.

\*.jpg      - matches "*.jpg"
\\.jpg      - matches "\.jpg"
\[one\].jpg - matches "[one].jpg"

Note also that rclone filter globs can only be used in one of the filter command line flags, not in the specification of the remote, so rclone copy "remote:dir*.jpg" /path/to/dir won't work - what is required is rclone --include "*.jpg" copy remote:dir /path/to/dir

Directories

Rclone keeps track of directories that could match any file patterns.

Eg if you add the include rule

/a/*.jpg

Rclone will synthesize the directory include rule

/a/

If you put any rules which end in / then they will only match directories.

Directory matches are only used to optimise directory access patterns - you must still match the files that you want to match. Directory matches won't optimise anything on bucket based remotes (eg s3, swift, Google Cloud Storage, b2) which don't have a concept of directory.

Differences between rsync and rclone patterns

Rclone implements bash style {a,b,c} glob matching which rsync doesn't.

Rclone always does a wildcard match, so a literal \ must always be escaped as \\.

How the rules are used

Rclone maintains a list of include rules and exclude rules.

Each file is matched in order against the list until it finds a match. The file is then included or excluded according to the rule type.

If the matcher falls off the bottom of the list then the path is included.

For example given the following rules, + being include, - being exclude,

- secret*.jpg
+ *.jpg
+ *.png
+ file2.avi
- *

This would include

file1.jpg
file3.png
file2.avi

This would exclude

secret17.jpg
any files which are not *.jpg or *.png

A similar process is done on directory entries before recursing into them. This only works on remotes which have a concept of directory (eg local, google drive, onedrive, amazon drive) and not on bucket based remotes (eg s3, swift, Google Cloud Storage, b2).
Adding filtering rules

Filtering rules are added with the following command line flags.

--exclude - Exclude files matching pattern

Add a single exclude rule with --exclude.

Eg --exclude *.bak to exclude all bak files from the sync.

--exclude-from - Read exclude patterns from file

Add exclude rules from a file.

Prepare a file like this exclude-file.txt

# a sample exclude rule file
*.bak
file2.jpg

Then use as --exclude-from exclude-file.txt. This will sync all files except those ending in bak and file2.jpg.

This is useful if you have a lot of rules.

--include - Include files matching pattern

Add a single include rule with --include.

Eg --include *.{png,jpg} to include all png and jpg files in the backup and no others.

This adds an implicit --exclude * at the very end of the filter list. This means you can mix --include and --include-from with the other filters (eg --exclude) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use --filter-from.

--include-from - Read include patterns from file

Add include rules from a file.

Prepare a file like this include-file.txt

# a sample include rule file
*.jpg
*.png
file2.avi

Then use as --include-from include-file.txt. This will sync all jpg, png files and file2.avi.

This is useful if you have a lot of rules.

This adds an implicit --exclude * at the very end of the filter list. This means you can mix --include and --include-from with the other filters (eg --exclude) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use --filter-from.

--filter - Add a file-filtering rule

This can be used to add a single include or exclude rule. Include rules start with + and exclude rules start with -. A special rule called ! can be used to clear the existing rules.

Eg --filter "- *.bak" to exclude all bak files from the sync.

--filter-from - Read filtering patterns from a file

Add include/exclude rules from a file.

Prepare a file like this filter-file.txt

# a sample filter rule file
- secret*.jpg
+ *.jpg
+ *.png
+ file2.avi
# exclude everything else
- *

Then use as --filter-from filter-file.txt. The rules are processed in the order that they are defined.

This example will include all jpg and png files, exclude any files matching secret*.jpg and include file2.avi. Everything else will be excluded from the sync.
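
Putting that file to work (paths are illustrative), previewing with --dry-run and -v first as recommended earlier:

rclone --dry-run -v --filter-from filter-file.txt sync /home/pictures remote:pictures
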
--files-from - Read list of source-file names

This reads a list of file names from the file passed in and only these files are transferred. The filtering rules are ignored completely if you use this option.

Prepare a file like this files-from.txt

# comment
file1.jpg
file2.jpg

Then use as --files-from files-from.txt. This will only transfer file1.jpg and file2.jpg provided they exist.

For example, let's say you had a few files you want to back up regularly with these absolute paths:

/home/user1/important
/home/user1/dir/file
/home/user2/stuff

To copy these you'd find a common subdirectory - in this case /home - and put the remaining paths in files-from.txt with or without a leading /, eg

user1/important
user1/dir/file
user2/stuff

You could then copy these to a remote like this

rclone copy --files-from files-from.txt /home remote:backup

The 3 files will arrive in remote:backup with the paths as in files-from.txt.

You could of course choose / as the root too, in which case your files-from.txt might look like this.

/home/user1/important
/home/user1/dir/file
/home/user2/stuff

And you would transfer it like this

rclone copy --files-from files-from.txt / remote:backup

In this case there will be an extra home directory on the remote.

--min-size - Don't transfer any file smaller than this

This option controls the minimum size file which will be transferred. This defaults to kBytes but a suffix of k, M, or G can be used.

For example --min-size 50k means no files smaller than 50 kBytes will be transferred.

--max-size - Don't transfer any file larger than this

This option controls the maximum size file which will be transferred. This defaults to kBytes but a suffix of k, M, or G can be used.

For example --max-size 1G means no files larger than 1 GByte will be transferred.

--max-age - Don't transfer any file older than this

This option controls the maximum age of files to transfer. Give in seconds or with a suffix of:

ms - Milliseconds
s - Seconds
m - Minutes
h - Hours
d - Days
w - Weeks
M - Months
y - Years

For example --max-age 2d means no files older than 2 days will be transferred.

--min-age - Don't transfer any file younger than this

This option controls the minimum age of files to transfer. Give in seconds or with a suffix (see --max-age for the list of suffixes)

For example --min-age 2d means no files younger than 2 days will be transferred.

--delete-excluded - Delete files on dest excluded from sync

Important this flag is dangerous - use with --dry-run and -v first.

When doing rclone sync this will delete any files which are excluded from the sync on the destination.

If for example you did a sync from A to B without the --min-size 50k flag

rclone sync A: B:

Then you repeated it like this with the --delete-excluded

rclone --min-size 50k --delete-excluded sync A: B:

This would delete all files on B which are less than 50 kBytes as these are now excluded from the sync.

Always test first with --dry-run and -v before using this flag.
--dump-filters - dump the filters to the output

This dumps the defined filters to the output as regular expressions.

Useful for debugging.

Quoting shell metacharacters

The examples above may not work verbatim in your shell as they have shell metacharacters in them (eg *), and may require quoting.

Eg linux, OSX

--include \*.jpg
--include '*.jpg'
--include='*.jpg'

In Windows the expansion is done by the command not the shell so this should work fine

--include *.jpg
  1211. Overview of cloud storage systems
  1212.  
  1213. Each cloud storage system is slighly different. Rclone attempts to provide a unified interface to them, but some underlying differences show through.
  1214.  
  1215. Features
  1216.  
  1217. Here is an overview of the major features of each cloud storage system.
  1218.  
  1219. Name Hash ModTime Case Insensitive Duplicate Files
  1220. Google Drive MD5 Yes No Yes
  1221. Amazon S3 MD5 Yes No No
  1222. Openstack Swift MD5 Yes No No
  1223. Dropbox - No Yes No
  1224. Google Cloud Storage MD5 Yes No No
  1225. Amazon Drive MD5 No Yes No
  1226. Microsoft One Drive SHA1 Yes Yes No
  1227. Hubic MD5 Yes No No
  1228. Backblaze B2 SHA1 Yes No No
  1229. Yandex Disk MD5 Yes No No
  1230. The local filesystem All Yes Depends No
  1231. Hash
  1232.  
  1233. The cloud storage system supports various hash types of the objects.
  1234. The hashes are used when transferring data as an integrity check and can be specifically used with the --checksum flag in syncs and in the check command.
  1235.  
  1236. To use the checksum checks between filesystems they must support a common hash type.
  1237.  
  1238. ModTime
  1239.  
  1240. The cloud storage system supports setting modification times on objects. If it does then this enables a using the modification times as part of the sync. If not then only the size will be checked by default, though the MD5SUM can be checked with the --checksum flag.
  1241.  
  1242. All cloud storage systems support some kind of date on the object and these will be set when transferring from the cloud storage system.
  1243.  
  1244. Case Insensitive
  1245.  
  1246. If a cloud storage systems is case sensitive then it is possible to have two files which differ only in case, eg file.txt and FILE.txt. If a cloud storage system is case insensitive then that isn't possible.
  1247.  
  1248. This can cause problems when syncing between a case insensitive system and a case sensitive system. The symptom of this is that no matter how many times you run the sync it never completes fully.
  1249.  
  1250. The local filesystem may or may not be case sensitive depending on OS.
  1251.  
  1252. Windows - usually case insensitive, though case is preserved
  1253. OSX - usually case insensitive, though it is possible to format case sensitive
  1254. Linux - usually case sensitive, but there are case insensitive file systems (eg FAT formatted USB keys)
  1255. Most of the time this doesn't cause any problems as people tend to avoid files whose name differs only by case even on case sensitive systems.
  1256.  
  1257. Duplicate files
  1258.  
  1259. If a cloud storage system allows duplicate files then it can have two objects with the same name.
  1260.  
  1261. This confuses rclone greatly when syncing - use the rclone dedupe command to rename or remove duplicates.
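For example, a sketch of fixing up a directory with duplicates (the path is hypothetical; rclone dedupe is interactive by default, and --dedupe-mode newest is assumed here to keep the newest copy automatically):

rclone dedupe remote:dupes
rclone dedupe --dedupe-mode newest remote:dupes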
  1262.  
  1263. Google Drive
  1264.  
  1265. Paths are specified as drive:path
  1266.  
  1267. Drive paths may be as deep as required, eg drive:directory/subdirectory.
  1268.  
  1269. The initial setup for drive involves getting a token from Google drive which you need to do in your browser. rclone config walks you through it.
  1270.  
  1271. Here is an example of how to make a remote called remote. First run:
  1272.  
  1273. rclone config
  1274. This will guide you through an interactive setup process:
  1275.  
  1276. n) New remote
  1277. d) Delete remote
  1278. q) Quit config
  1279. e/n/d/q> n
  1280. name> remote
  1281. Type of storage to configure.
  1282. Choose a number from below, or type in your own value
  1283. 1 / Amazon Drive
  1284. \ "amazon cloud drive"
  1285. 2 / Amazon S3 (also Dreamhost, Ceph)
  1286. \ "s3"
  1287. 3 / Backblaze B2
  1288. \ "b2"
  1289. 4 / Dropbox
  1290. \ "dropbox"
  1291. 5 / Google Cloud Storage (this is not Google Drive)
  1292. \ "google cloud storage"
  1293. 6 / Google Drive
  1294. \ "drive"
  1295. 7 / Hubic
  1296. \ "hubic"
  1297. 8 / Local Disk
  1298. \ "local"
  1299. 9 / Microsoft OneDrive
  1300. \ "onedrive"
  1301. 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
  1302. \ "swift"
  1303. 11 / Yandex Disk
  1304. \ "yandex"
  1305. Storage> 6
  1306. Google Application Client Id - leave blank normally.
  1307. client_id>
  1308. Google Application Client Secret - leave blank normally.
  1309. client_secret>
  1310. Remote config
  1311. Use auto config?
  1312. * Say Y if not sure
  1313. * Say N if you are working on a remote or headless machine or Y didn't work
  1314. y) Yes
  1315. n) No
  1316. y/n> y
  1317. If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
  1318. Log in and authorize rclone for access
  1319. Waiting for code...
  1320. Got code
  1321. --------------------
  1322. [remote]
  1323. client_id =
  1324. client_secret =
  1325. token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
  1326. --------------------
  1327. y) Yes this is OK
  1328. e) Edit this remote
  1329. d) Delete this remote
  1330. y/e/d> y
Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.
  1332.  
  1333. You can then use it like this,
  1334.  
  1335. List directories in top level of your drive
  1336.  
  1337. rclone lsd remote:
  1338. List all the files in your drive
  1339.  
  1340. rclone ls remote:
  1341. To copy a local directory to a drive directory called backup
  1342.  
  1343. rclone copy /home/source remote:backup
  1344. Modified time
  1345.  
  1346. Google drive stores modification times accurate to 1 ms.
  1347.  
  1348. Revisions
  1349.  
  1350. Google drive stores revisions of files. When you upload a change to an existing file to google drive using rclone it will create a new revision of that file.
  1351.  
Revisions follow the standard Google policy which at the time of writing was:

They are deleted after 30 days or 100 revisions (whichever comes first).
They do not count towards a user storage quota.
  1356. Deleting files
  1357.  
  1358. By default rclone will delete files permanently when requested. If sending them to the trash is required instead then use the --drive-use-trash flag.
  1359.  
  1360. Specific options
  1361.  
  1362. Here are the command line options specific to this cloud storage system.
  1363.  
  1364. --drive-chunk-size=SIZE
  1365.  
Upload chunk size. Must be a power of 2 >= 256k. The default value is 8 MB.

Making this larger will improve performance, but note that each chunk is buffered in memory, one per transfer.
  1369.  
  1370. Reducing this will reduce memory usage but decrease performance.
  1371.  
  1372. --drive-full-list
  1373.  
  1374. No longer does anything - kept for backwards compatibility.
  1375.  
  1376. --drive-upload-cutoff=SIZE
  1377.  
  1378. File size cutoff for switching to chunked upload. Default is 8 MB.
  1379.  
  1380. --drive-use-trash
  1381.  
  1382. Send files to the trash instead of deleting permanently. Defaults to off, namely deleting files permanently.
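For example, a sketch of deleting files to the trash rather than permanently (the path is hypothetical):

rclone --drive-use-trash delete remote:old-files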
  1383.  
  1384. --drive-auth-owner-only
  1385.  
  1386. Only consider files owned by the authenticated user. Requires that --drive-full-list=true (default).
  1387.  
  1388. --drive-formats
  1389.  
  1390. Google documents can only be exported from Google drive. When rclone downloads a Google doc it chooses a format to download depending upon this setting.
  1391.  
  1392. By default the formats are docx,xlsx,pptx,svg which are a sensible default for an editable document.
  1393.  
  1394. When choosing a format, rclone runs down the list provided in order and chooses the first file format the doc can be exported as from the list. If the file can't be exported to a format on the formats list, then rclone will choose a format from the default list.
  1395.  
  1396. If you prefer an archive copy then you might use --drive-formats pdf, or if you prefer openoffice/libreoffice formats you might use --drive-formats ods,odt.
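For example, a sketch of copying docs down as PDFs (paths hypothetical):

rclone --drive-formats pdf copy remote:docs /backup/docs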
  1397.  
Note that rclone adds the extension to the Google doc, so if it is called My Spreadsheet on Google Docs, it will be exported as My Spreadsheet.xlsx or My Spreadsheet.pdf etc.
  1399.  
  1400. Here are the possible extensions with their corresponding mime types.
  1401.  
Extension  Mime Type                                                                   Description
csv        text/csv                                                                    Standard CSV format for Spreadsheets
doc        application/msword                                                          Microsoft Office Document
docx       application/vnd.openxmlformats-officedocument.wordprocessingml.document     Microsoft Office Document
html       text/html                                                                   An HTML Document
jpg        image/jpeg                                                                  A JPEG Image File
ods        application/vnd.oasis.opendocument.spreadsheet                              Openoffice Spreadsheet
ods        application/x-vnd.oasis.opendocument.spreadsheet                            Openoffice Spreadsheet
odt        application/vnd.oasis.opendocument.text                                     Openoffice Document
pdf        application/pdf                                                             Adobe PDF Format
png        image/png                                                                   PNG Image Format
pptx       application/vnd.openxmlformats-officedocument.presentationml.presentation   Microsoft Office Powerpoint
rtf        application/rtf                                                             Rich Text Format
svg        image/svg+xml                                                               Scalable Vector Graphics Format
txt        text/plain                                                                  Plain Text
xls        application/vnd.ms-excel                                                    Microsoft Office Spreadsheet
xlsx       application/vnd.openxmlformats-officedocument.spreadsheetml.sheet           Microsoft Office Spreadsheet
zip        application/zip                                                             A ZIP file of HTML, Images and CSS
  1420. Limitations
  1421.  
Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second. Individual files may be transferred much faster, at 100s of MBytes/s, but lots of small files can take a long time.
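If you see rate limit errors, one sketch of a workaround is to reduce concurrency with the general --transfers and --checkers flags (the values here are illustrative):

rclone copy --transfers 2 --checkers 4 /home/source remote:backup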
  1423.  
  1424. Making your own client_id
  1425.  
When you use rclone with Google drive in its default configuration you are using rclone's client_id. This is shared between all the rclone users. There is a global rate limit, set by Google, on the number of queries per second that each client_id can make. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google.
  1427.  
However you might find you get better performance making your own client_id if you are a heavy user. Or you may not, depending on exactly how Google have been raising rclone's rate limit.
  1429.  
  1430. Here is how to create your own Google Drive client ID for rclone:
  1431.  
  1432. Log into the Google API Console with your Google account. It doesn't matter what Google account you use. (It need not be the same account as the Google Drive you want to access)
  1433.  
  1434. Select a project or create a new project.
  1435.  
  1436. Under Overview, Google APIs, Google Apps APIs, click "Drive API", then "Enable".
  1437.  
  1438. Click "Credentials" in the left-side panel (not "Go to credentials", which opens the wizard), then "Create credentials", then "OAuth client ID". It will prompt you to set the OAuth consent screen product name, if you haven't set one already.
  1439.  
  1440. Choose an application type of "other", and click "Create". (the default name is fine)
  1441.  
  1442. It will show you a client ID and client secret. Use these values in rclone config to add a new remote or edit an existing remote.
  1443.  
  1444. (Thanks to @balazer on github for these instructions.)
  1445.  
  1446. Amazon S3
  1447.  
  1448. Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.
  1449.  
  1450. Here is an example of making an s3 configuration. First run
  1451.  
  1452. rclone config
  1453. This will guide you through an interactive setup process.
  1454.  
  1455. No remotes found - make a new one
  1456. n) New remote
  1457. s) Set configuration password
  1458. n/s> n
  1459. name> remote
  1460. Type of storage to configure.
  1461. Choose a number from below, or type in your own value
  1462. 1 / Amazon Drive
  1463. \ "amazon cloud drive"
  1464. 2 / Amazon S3 (also Dreamhost, Ceph)
  1465. \ "s3"
  1466. 3 / Backblaze B2
  1467. \ "b2"
  1468. 4 / Dropbox
  1469. \ "dropbox"
  1470. 5 / Google Cloud Storage (this is not Google Drive)
  1471. \ "google cloud storage"
  1472. 6 / Google Drive
  1473. \ "drive"
  1474. 7 / Hubic
  1475. \ "hubic"
  1476. 8 / Local Disk
  1477. \ "local"
  1478. 9 / Microsoft OneDrive
  1479. \ "onedrive"
  1480. 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
  1481. \ "swift"
  1482. 11 / Yandex Disk
  1483. \ "yandex"
  1484. Storage> 2
  1485. Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
  1486. Choose a number from below, or type in your own value
  1487. 1 / Enter AWS credentials in the next step
  1488. \ "false"
  1489. 2 / Get AWS credentials from the environment (env vars or IAM)
  1490. \ "true"
  1491. env_auth> 1
  1492. AWS Access Key ID - leave blank for anonymous access or runtime credentials.
  1493. access_key_id> access_key
  1494. AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
  1495. secret_access_key> secret_key
  1496. Region to connect to.
  1497. Choose a number from below, or type in your own value
  1498. / The default endpoint - a good choice if you are unsure.
  1499. 1 | US Region, Northern Virginia or Pacific Northwest.
  1500. | Leave location constraint empty.
  1501. \ "us-east-1"
  1502. / US West (Oregon) Region
  1503. 2 | Needs location constraint us-west-2.
  1504. \ "us-west-2"
  1505. / US West (Northern California) Region
  1506. 3 | Needs location constraint us-west-1.
  1507. \ "us-west-1"
  1508. / EU (Ireland) Region Region
  1509. 4 | Needs location constraint EU or eu-west-1.
  1510. \ "eu-west-1"
  1511. / EU (Frankfurt) Region
  1512. 5 | Needs location constraint eu-central-1.
  1513. \ "eu-central-1"
  1514. / Asia Pacific (Singapore) Region
  1515. 6 | Needs location constraint ap-southeast-1.
  1516. \ "ap-southeast-1"
  1517. / Asia Pacific (Sydney) Region
  1518. 7 | Needs location constraint ap-southeast-2.
  1519. \ "ap-southeast-2"
  1520. / Asia Pacific (Tokyo) Region
  1521. 8 | Needs location constraint ap-northeast-1.
  1522. \ "ap-northeast-1"
  1523. / South America (Sao Paulo) Region
  1524. 9 | Needs location constraint sa-east-1.
  1525. \ "sa-east-1"
  1526. / If using an S3 clone that only understands v2 signatures
  1527. 10 | eg Ceph/Dreamhost
  1528. | set this and make sure you set the endpoint.
  1529. \ "other-v2-signature"
  1530. / If using an S3 clone that understands v4 signatures set this
  1531. 11 | and make sure you set the endpoint.
  1532. \ "other-v4-signature"
  1533. region> 1
  1534. Endpoint for S3 API.
  1535. Leave blank if using AWS to use the default endpoint for the region.
  1536. Specify if using an S3 clone such as Ceph.
  1537. endpoint>
  1538. Location constraint - must be set to match the Region. Used when creating buckets only.
  1539. Choose a number from below, or type in your own value
  1540. 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
  1541. \ ""
  1542. 2 / US West (Oregon) Region.
  1543. \ "us-west-2"
  1544. 3 / US West (Northern California) Region.
  1545. \ "us-west-1"
  1546. 4 / EU (Ireland) Region.
  1547. \ "eu-west-1"
  1548. 5 / EU Region.
  1549. \ "EU"
  1550. 6 / Asia Pacific (Singapore) Region.
  1551. \ "ap-southeast-1"
  1552. 7 / Asia Pacific (Sydney) Region.
  1553. \ "ap-southeast-2"
  1554. 8 / Asia Pacific (Tokyo) Region.
  1555. \ "ap-northeast-1"
  1556. 9 / South America (Sao Paulo) Region.
  1557. \ "sa-east-1"
  1558. location_constraint> 1
  1559. Canned ACL used when creating buckets and/or storing objects in S3.
  1560. For more info visit http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
  1561. Choose a number from below, or type in your own value
  1562. 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
  1563. \ "private"
  1564. 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
  1565. \ "public-read"
  1566. / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
  1567. 3 | Granting this on a bucket is generally not recommended.
  1568. \ "public-read-write"
  1569. 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
  1570. \ "authenticated-read"
  1571. / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
  1572. 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
  1573. \ "bucket-owner-read"
  1574. / Both the object owner and the bucket owner get FULL_CONTROL over the object.
  1575. 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
  1576. \ "bucket-owner-full-control"
  1577. acl> private
  1578. The server-side encryption algorithm used when storing this object in S3.
  1579. Choose a number from below, or type in your own value
  1580. 1 / None
  1581. \ ""
  1582. 2 / AES256
  1583. \ "AES256"
  1584. server_side_encryption>
  1585. Remote config
  1586. --------------------
  1587. [remote]
  1588. env_auth = false
  1589. access_key_id = access_key
  1590. secret_access_key = secret_key
  1591. region = us-east-1
  1592. endpoint =
  1593. location_constraint =
  1594. --------------------
  1595. y) Yes this is OK
  1596. e) Edit this remote
  1597. d) Delete this remote
  1598. y/e/d> y
  1599. This remote is called remote and can now be used like this
  1600.  
  1601. See all buckets
  1602.  
  1603. rclone lsd remote:
  1604. Make a new bucket
  1605.  
  1606. rclone mkdir remote:bucket
  1607. List the contents of a bucket
  1608.  
  1609. rclone ls remote:bucket
  1610. Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.
  1611.  
  1612. rclone sync /home/local/directory remote:bucket
  1613. Modified time
  1614.  
The modified time is stored as metadata on the object as X-Amz-Meta-Mtime, as a floating point number of seconds since the epoch, accurate to 1 ns.
  1616.  
  1617. Multipart uploads
  1618.  
  1619. rclone supports multipart uploads with S3 which means that it can upload files bigger than 5GB. Note that files uploaded with multipart upload don't have an MD5SUM.
  1620.  
  1621. Buckets and Regions
  1622.  
  1623. With Amazon S3 you can list buckets (rclone lsd) using any region, but you can only access the content of a bucket from the region it was created in. If you attempt to access a bucket from the wrong region, you will get an error, incorrect region, the bucket is not in 'XXX' region.
  1624.  
  1625. Authentication
  1626.  
There are two ways to supply rclone with a set of AWS credentials. In order of precedence:

Directly in the rclone configuration file (as configured by rclone config):
    set access_key_id and secret_access_key
Runtime configuration:
    set env_auth to true in the config file
    export the following environment variables before running rclone:
        Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY
        Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY
    or run rclone on an EC2 instance with an IAM role

If none of these options actually ends up providing rclone with AWS credentials then S3 interaction will be unauthenticated (see below).
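For example, a sketch of the runtime configuration route, with env_auth = true in the config file and placeholder credentials exported first:

export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
rclone lsd remote: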
  1638.  
  1639. Anonymous access to public buckets
  1640.  
  1641. If you want to use rclone to access a public bucket, configure with a blank access_key_id and secret_access_key. Eg
  1642.  
  1643. No remotes found - make a new one
  1644. n) New remote
  1645. q) Quit config
  1646. n/q> n
  1647. name> anons3
  1648. What type of source is it?
  1649. Choose a number from below
  1650. 1) amazon cloud drive
  1651. 2) b2
  1652. 3) drive
  1653. 4) dropbox
  1654. 5) google cloud storage
  1655. 6) swift
  1656. 7) hubic
  1657. 8) local
  1658. 9) onedrive
  1659. 10) s3
  1660. 11) yandex
  1661. type> 10
  1662. Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
  1663. Choose a number from below, or type in your own value
  1664. * Enter AWS credentials in the next step
  1665. 1) false
  1666. * Get AWS credentials from the environment (env vars or IAM)
  1667. 2) true
  1668. env_auth> 1
  1669. AWS Access Key ID - leave blank for anonymous access or runtime credentials.
  1670. access_key_id>
  1671. AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
  1672. secret_access_key>
  1673. ...
  1674. Then use it as normal with the name of the public bucket, eg
  1675.  
  1676. rclone lsd anons3:1000genomes
  1677. You will be able to list and copy data but not upload it.
  1678.  
  1679. Ceph
  1680.  
  1681. Ceph is an object storage system which presents an Amazon S3 interface.
  1682.  
  1683. To use rclone with ceph, you need to set the following parameters in the config.
  1684.  
  1685. access_key_id = Whatever
  1686. secret_access_key = Whatever
  1687. endpoint = https://ceph.endpoint.goes.here/
  1688. region = other-v2-signature
  1689. Note also that Ceph sometimes puts / in the passwords it gives users. If you read the secret access key using the command line tools you will get a JSON blob with the / escaped as \/. Make sure you only write / in the secret access key.
  1690.  
  1691. Eg the dump from Ceph looks something like this (irrelevant keys removed).
  1692.  
{
    "user_id": "xxx",
    "display_name": "xxxx",
    "keys": [
        {
            "user": "xxx",
            "access_key": "xxxxxx",
            "secret_key": "xxxxxx\/xxxx"
        }
    ]
}
  1704. Because this is a json dump, it is encoding the / as \/, so if you use the secret key as xxxxxx/xxxx it will work fine.
  1705.  
  1706. Minio
  1707.  
  1708. Minio is an object storage server built for cloud application developers and devops.
  1709.  
  1710. It is very easy to install and provides an S3 compatible server which can be used by rclone.
  1711.  
  1712. To use it, install Minio following the instructions from the web site.
  1713.  
  1714. When it configures itself Minio will print something like this
  1715.  
  1716. AccessKey: WLGDGYAQYIGI833EV05A SecretKey: BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF Region: us-east-1
  1717.  
  1718. Minio Object Storage:
  1719. http://127.0.0.1:9000
  1720. http://10.0.0.3:9000
  1721.  
  1722. Minio Browser:
  1723. http://127.0.0.1:9000
  1724. http://10.0.0.3:9000
  1725. These details need to go into rclone config like this. Note that it is important to put the region in as stated above.
  1726.  
  1727. env_auth> 1
  1728. access_key_id> WLGDGYAQYIGI833EV05A
  1729. secret_access_key> BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
  1730. region> us-east-1
  1731. endpoint> http://10.0.0.3:9000
  1732. location_constraint>
  1733. server_side_encryption>
  1734. Which makes the config file look like this
  1735.  
  1736. [minio]
  1737. env_auth = false
  1738. access_key_id = WLGDGYAQYIGI833EV05A
  1739. secret_access_key = BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
  1740. region = us-east-1
  1741. endpoint = http://10.0.0.3:9000
  1742. location_constraint =
  1743. server_side_encryption =
  1744. Minio doesn't support all the features of S3 yet. In particular it doesn't support MD5 checksums (ETags) or metadata. This means rclone can't check MD5SUMs or store the modified date. However you can work around this with the --size-only flag of rclone.
  1745.  
  1746. So once set up, for example to copy files into a bucket
  1747.  
  1748. rclone --size-only copy /path/to/files minio:bucket
  1749. Swift
  1750.  
Swift refers to Openstack Object Storage. Commercial implementations include:
  1752.  
  1753. Rackspace Cloud Files
  1754. Memset Memstore
  1755. Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, eg remote:container/path/to/dir.
  1756.  
  1757. Here is an example of making a swift configuration. First run
  1758.  
  1759. rclone config
  1760. This will guide you through an interactive setup process.
  1761.  
  1762. No remotes found - make a new one
  1763. n) New remote
  1764. s) Set configuration password
  1765. n/s> n
  1766. name> remote
  1767. Type of storage to configure.
  1768. Choose a number from below, or type in your own value
  1769. 1 / Amazon Drive
  1770. \ "amazon cloud drive"
  1771. 2 / Amazon S3 (also Dreamhost, Ceph)
  1772. \ "s3"
  1773. 3 / Backblaze B2
  1774. \ "b2"
  1775. 4 / Dropbox
  1776. \ "dropbox"
  1777. 5 / Google Cloud Storage (this is not Google Drive)
  1778. \ "google cloud storage"
  1779. 6 / Google Drive
  1780. \ "drive"
  1781. 7 / Hubic
  1782. \ "hubic"
  1783. 8 / Local Disk
  1784. \ "local"
  1785. 9 / Microsoft OneDrive
  1786. \ "onedrive"
  1787. 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
  1788. \ "swift"
  1789. 11 / Yandex Disk
  1790. \ "yandex"
  1791. Storage> 10
  1792. User name to log in.
  1793. user> user_name
  1794. API key or password.
  1795. key> password_or_api_key
  1796. Authentication URL for server.
  1797. Choose a number from below, or type in your own value
  1798. 1 / Rackspace US
  1799. \ "https://auth.api.rackspacecloud.com/v1.0"
  1800. 2 / Rackspace UK
  1801. \ "https://lon.auth.api.rackspacecloud.com/v1.0"
  1802. 3 / Rackspace v2
  1803. \ "https://identity.api.rackspacecloud.com/v2.0"
  1804. 4 / Memset Memstore UK
  1805. \ "https://auth.storage.memset.com/v1.0"
  1806. 5 / Memset Memstore UK v2
  1807. \ "https://auth.storage.memset.com/v2.0"
  1808. 6 / OVH
  1809. \ "https://auth.cloud.ovh.net/v2.0"
  1810. auth> 1
  1811. User domain - optional (v3 auth)
  1812. domain> Default
  1813. Tenant name - optional
  1814. tenant>
  1815. Tenant domain - optional (v3 auth)
  1816. tenant_domain>
  1817. Region name - optional
  1818. region>
  1819. Storage URL - optional
  1820. storage_url>
  1821. Remote config
  1822. AuthVersion - optional - set to (1,2,3) if your auth URL has no version
  1823. auth_version>
  1824. --------------------
  1825. [remote]
  1826. user = user_name
  1827. key = password_or_api_key
  1828. auth = https://auth.api.rackspacecloud.com/v1.0
  1829. tenant =
  1830. region =
  1831. storage_url =
  1832. --------------------
  1833. y) Yes this is OK
  1834. e) Edit this remote
  1835. d) Delete this remote
  1836. y/e/d> y
  1837. This remote is called remote and can now be used like this
  1838.  
  1839. See all containers
  1840.  
  1841. rclone lsd remote:
  1842. Make a new container
  1843.  
  1844. rclone mkdir remote:container
  1845. List the contents of a container
  1846.  
  1847. rclone ls remote:container
  1848. Sync /home/local/directory to the remote container, deleting any excess files in the container.
  1849.  
  1850. rclone sync /home/local/directory remote:container
  1851. Specific options
  1852.  
  1853. Here are the command line options specific to this cloud storage system.
  1854.  
  1855. --swift-chunk-size=SIZE
  1856.  
  1857. Above this size files will be chunked into a _segments container. The default for this is 5GB which is its maximum value.
  1858.  
  1859. Modified time
  1860.  
The modified time is stored as metadata on the object as X-Object-Meta-Mtime, as a floating point number of seconds since the epoch, accurate to 1 ns.

This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
  1864.  
  1865. Limitations
  1866.  
  1867. The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.
  1868.  
  1869. Troubleshooting
  1870.  
  1871. Rclone gives Failed to create file system for "remote:": Bad Request
  1872.  
  1873. Due to an oddity of the underlying swift library, it gives a "Bad Request" error rather than a more sensible error when the authentication fails for Swift.
  1874.  
  1875. So this most likely means your username / password is wrong. You can investigate further with the --dump-bodies flag.
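For example, a sketch of inspecting the raw HTTP exchange (note this logs request and response bodies, including credentials):

rclone --dump-bodies lsd remote: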
  1876.  
  1877. Rclone gives Failed to create file system: Response didn't have storage storage url and auth token
  1878.  
  1879. This is most likely caused by forgetting to specify your tenant when setting up a swift remote.
  1880.  
  1881. Dropbox
  1882.  
  1883. Paths are specified as remote:path
  1884.  
  1885. Dropbox paths may be as deep as required, eg remote:directory/subdirectory.
  1886.  
  1887. The initial setup for dropbox involves getting a token from Dropbox which you need to do in your browser. rclone config walks you through it.
  1888.  
  1889. Here is an example of how to make a remote called remote. First run:
  1890.  
  1891. rclone config
  1892. This will guide you through an interactive setup process:
  1893.  
  1894. n) New remote
  1895. d) Delete remote
  1896. q) Quit config
  1897. e/n/d/q> n
  1898. name> remote
  1899. Type of storage to configure.
  1900. Choose a number from below, or type in your own value
  1901. 1 / Amazon Drive
  1902. \ "amazon cloud drive"
  1903. 2 / Amazon S3 (also Dreamhost, Ceph)
  1904. \ "s3"
  1905. 3 / Backblaze B2
  1906. \ "b2"
  1907. 4 / Dropbox
  1908. \ "dropbox"
  1909. 5 / Google Cloud Storage (this is not Google Drive)
  1910. \ "google cloud storage"
  1911. 6 / Google Drive
  1912. \ "drive"
  1913. 7 / Hubic
  1914. \ "hubic"
  1915. 8 / Local Disk
  1916. \ "local"
  1917. 9 / Microsoft OneDrive
  1918. \ "onedrive"
  1919. 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
  1920. \ "swift"
  1921. 11 / Yandex Disk
  1922. \ "yandex"
  1923. Storage> 4
  1924. Dropbox App Key - leave blank normally.
  1925. app_key>
  1926. Dropbox App Secret - leave blank normally.
  1927. app_secret>
  1928. Remote config
  1929. Please visit:
  1930. https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code
  1931. Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX
  1932. --------------------
  1933. [remote]
  1934. app_key =
  1935. app_secret =
  1936. token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  1937. --------------------
  1938. y) Yes this is OK
  1939. e) Edit this remote
  1940. d) Delete this remote
  1941. y/e/d> y
  1942. You can then use it like this,
  1943.  
  1944. List directories in top level of your dropbox
  1945.  
  1946. rclone lsd remote:
  1947. List all the files in your dropbox
  1948.  
  1949. rclone ls remote:
  1950. To copy a local directory to a dropbox directory called backup
  1951.  
  1952. rclone copy /home/source remote:backup
  1953. Modified time and MD5SUMs
  1954.  
  1955. Dropbox doesn't provide the ability to set modification times in the V1 public API, so rclone can't support modified time with Dropbox.
  1956.  
  1957. This may change in the future - see these issues for details:
  1958.  
  1959. Dropbox V2 API
  1960. Allow syncs for remotes that can't set modtime on existing objects
  1961. Dropbox doesn't return any sort of checksum (MD5 or SHA1).
  1962.  
  1963. Together that means that syncs to dropbox will effectively have the --size-only flag set.
  1964.  
  1965. Specific options
  1966.  
  1967. Here are the command line options specific to this cloud storage system.
  1968.  
  1969. --dropbox-chunk-size=SIZE
  1970.  
  1971. Upload chunk size. Max 150M. The default is 128MB. Note that this isn't buffered into memory.
  1972.  
  1973. Limitations
  1974.  
  1975. Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
  1976.  
There are some file names such as thumbs.db which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading if it attempts to upload one of those file names, but the sync won't fail.
  1978.  
If you have more than 10,000 files in a directory then rclone purge dropbox:dir will return the error Failed to purge: There are too many files involved in this operation. As a work-around do an rclone delete dropbox:dir followed by an rclone rmdir dropbox:dir.
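That is, a sketch of the work-around (the directory name is hypothetical):

rclone delete dropbox:dir
rclone rmdir dropbox:dir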
  1980.  
  1981. Google Cloud Storage
  1982.  
  1983. Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.
  1984.  
  1985. The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. rclone config walks you through it.
  1986.  
  1987. Here is an example of how to make a remote called remote. First run:
  1988.  
  1989. rclone config
  1990. This will guide you through an interactive setup process:
  1991.  
  1992. n) New remote
  1993. d) Delete remote
  1994. q) Quit config
  1995. e/n/d/q> n
  1996. name> remote
  1997. Type of storage to configure.
  1998. Choose a number from below, or type in your own value
  1999. 1 / Amazon Drive
  2000. \ "amazon cloud drive"
  2001. 2 / Amazon S3 (also Dreamhost, Ceph)
  2002. \ "s3"
  2003. 3 / Backblaze B2
  2004. \ "b2"
  2005. 4 / Dropbox
  2006. \ "dropbox"
  2007. 5 / Google Cloud Storage (this is not Google Drive)
  2008. \ "google cloud storage"
  2009. 6 / Google Drive
  2010. \ "drive"
  2011. 7 / Hubic
  2012. \ "hubic"
  2013. 8 / Local Disk
  2014. \ "local"
  2015. 9 / Microsoft OneDrive
  2016. \ "onedrive"
  2017. 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
  2018. \ "swift"
  2019. 11 / Yandex Disk
  2020. \ "yandex"
  2021. Storage> 5
  2022. Google Application Client Id - leave blank normally.
  2023. client_id>
  2024. Google Application Client Secret - leave blank normally.
  2025. client_secret>
  2026. Project number optional - needed only for list/create/delete buckets - see your developer console.
  2027. project_number> 12345678
  2028. Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
  2029. service_account_file>
  2030. Access Control List for new objects.
  2031. Choose a number from below, or type in your own value
  2032. * Object owner gets OWNER access, and all Authenticated Users get READER access.
  2033. 1) authenticatedRead
  2034. * Object owner gets OWNER access, and project team owners get OWNER access.
  2035. 2) bucketOwnerFullControl
  2036. * Object owner gets OWNER access, and project team owners get READER access.
  2037. 3) bucketOwnerRead
  2038. * Object owner gets OWNER access [default if left blank].
  2039. 4) private
  2040. * Object owner gets OWNER access, and project team members get access according to their roles.
  2041. 5) projectPrivate
  2042. * Object owner gets OWNER access, and all Users get READER access.
  2043. 6) publicRead
  2044. object_acl> 4
  2045. Access Control List for new buckets.
  2046. Choose a number from below, or type in your own value
  2047. * Project team owners get OWNER access, and all Authenticated Users get READER access.
  2048. 1) authenticatedRead
  2049. * Project team owners get OWNER access [default if left blank].
  2050. 2) private
  2051. * Project team members get access according to their roles.
  2052. 3) projectPrivate
  2053. * Project team owners get OWNER access, and all Users get READER access.
  2054. 4) publicRead
  2055. * Project team owners get OWNER access, and all Users get WRITER access.
  2056. 5) publicReadWrite
  2057. bucket_acl> 2
  2058. Remote config
  2060. Use auto config?
  2061. * Say Y if not sure
  2062. * Say N if you are working on a remote or headless machine or Y didn't work
  2063. y) Yes
  2064. n) No
  2065. y/n> y
  2066. If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
  2067. Log in and authorize rclone for access
  2068. Waiting for code...
  2069. Got code
  2070. --------------------
  2071. [remote]
  2072. type = google cloud storage
  2073. client_id =
  2074. client_secret =
  2075. token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
  2076. project_number = 12345678
  2077. object_acl = private
  2078. bucket_acl = private
  2079. --------------------
  2080. y) Yes this is OK
  2081. e) Edit this remote
  2082. d) Delete this remote
  2083. y/e/d> y
Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.
  2085.  
  2086. This remote is called remote and can now be used like this
  2087.  
  2088. See all the buckets in your project
  2089.  
  2090. rclone lsd remote:
  2091. Make a new bucket
  2092.  
  2093. rclone mkdir remote:bucket
  2094. List the contents of a bucket
  2095.  
  2096. rclone ls remote:bucket
  2097. Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.
  2098.  
  2099. rclone sync /home/local/directory remote:bucket
  2100. Service Account support
  2101.  
  2102. You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.
  2103.  
To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machine. These credentials are what rclone will use for authentication.
  2105.  
  2106. To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt and rclone won't use the browser based authentication flow.
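For example, a sketch of the resulting config entry (the file path is hypothetical):

[remote]
type = google cloud storage
service_account_file = /path/to/service-account-credentials.json
project_number = 12345678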
  2107.  
  2108. Modified time
  2109.  
Google Cloud Storage stores MD5 sums natively, and rclone stores modification times as metadata on the object, under the "mtime" key in RFC3339 format accurate to 1 ns.
  2111.  
  2112. Amazon Drive
  2113.  
  2114. Paths are specified as remote:path
  2115.  
  2116. Paths may be as deep as required, eg remote:directory/subdirectory.
  2117.  
  2118. The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. rclone config walks you through it.
  2119.  
  2120. Here is an example of how to make a remote called remote. First run:
  2121.  
  2122. rclone config
  2123. This will guide you through an interactive setup process:
  2124.  
  2125. n) New remote
  2126. d) Delete remote
  2127. q) Quit config
  2128. e/n/d/q> n
  2129. name> remote
  2130. Type of storage to configure.
  2131. Choose a number from below, or type in your own value
  2132. 1 / Amazon Drive
  2133. \ "amazon cloud drive"
  2134. 2 / Amazon S3 (also Dreamhost, Ceph)
  2135. \ "s3"
  2136. 3 / Backblaze B2
  2137. \ "b2"
  2138. 4 / Dropbox
  2139. \ "dropbox"
  2140. 5 / Google Cloud Storage (this is not Google Drive)
  2141. \ "google cloud storage"
  2142. 6 / Google Drive
  2143. \ "drive"
  2144. 7 / Hubic
  2145. \ "hubic"
  2146. 8 / Local Disk
  2147. \ "local"
  2148. 9 / Microsoft OneDrive
  2149. \ "onedrive"
  2150. 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
  2151. \ "swift"
  2152. 11 / Yandex Disk
  2153. \ "yandex"
  2154. Storage> 1
  2155. Amazon Application Client Id - leave blank normally.
  2156. client_id>
  2157. Amazon Application Client Secret - leave blank normally.
  2158. client_secret>
  2159. Remote config
  2160. If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
  2161. Log in and authorize rclone for access
  2162. Waiting for code...
  2163. Got code
  2164. --------------------
  2165. [remote]
  2166. client_id =
  2167. client_secret =
  2168. token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
  2169. --------------------
  2170. y) Yes this is OK
  2171. e) Edit this remote
  2172. d) Delete this remote
  2173. y/e/d> y
  2174. See the remote setup docs for how to set it up on a machine with no Internet browser available.
  2175.  
Note that rclone runs a webserver on your local machine to collect the token as returned from Amazon. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.
  2177.  
  2178. Once configured you can then use rclone like this,
  2179.  
  2180. List directories in top level of your Amazon Drive
  2181.  
  2182. rclone lsd remote:
  2183. List all the files in your Amazon Drive
  2184.  
  2185. rclone ls remote:
  2186. To copy a local directory to an Amazon Drive directory called backup
  2187.  
  2188. rclone copy /home/source remote:backup
  2189. Modified time and MD5SUMs
  2190.  
  2191. Amazon Drive doesn't allow modification times to be changed via the API so these won't be accurate or used for syncing.
  2192.  
  2193. It does store MD5SUMs so for a more accurate sync, you can use the --checksum flag.
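For example, a sketch of a checksum-based sync (paths hypothetical):

rclone sync --checksum /home/source remote:backup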
  2194.  
  2195. Deleting files
  2196.  
  2197. Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website.
  2198.  
  2199. Specific options
  2200.  
  2201. Here are the command line options specific to this cloud storage system.
  2202.  
  2203. --acd-templink-threshold=SIZE
  2204.  
  2205. Files this size or more will be downloaded via their tempLink. This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.
  2206.  
  2207. To download files above this threshold, rclone requests a tempLink which downloads the file through a temporary URL directly from the underlying S3 storage.
  2208.  
  2209. --acd-upload-wait-time=TIME
  2210.  
  2211. Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This controls the time rclone waits - 2 minutes by default. You might want to increase the time if you are having problems with very big files. Upload with the -v flag for more info.
  2212.  
  2213. Limitations
  2214.  
  2215. Note that Amazon Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
  2216.  
  2217. Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see --retries flag) which should hopefully work around this problem.
  2218.  
  2219. Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.
  2220.  
At the time of writing (Jan 2016) this is in the area of 50GB per file. This means that larger files are likely to fail.

Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as it would any other failure. To avoid this problem, use the --max-size=50GB option to limit the maximum size of uploaded files.
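For example, a sketch of a sync capped below that limit (paths hypothetical; 50G here means 50 GBytes):

rclone sync --max-size 50G /home/source remote:backup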
  2224.  
  2225. Microsoft One Drive
  2226.  
  2227. Paths are specified as remote:path
  2228.  
  2229. Paths may be as deep as required, eg remote:directory/subdirectory.
  2230.  
  2231. The initial setup for One Drive involves getting a token from Microsoft which you need to do in your browser. rclone config walks you through it.
  2232.  
  2233. Here is an example of how to make a remote called remote. First run:
  2234.  
  2235. rclone config
  2236. This will guide you through an interactive setup process:
  2237.  
  2238. No remotes found - make a new one
  2239. n) New remote
  2240. s) Set configuration password
  2241. n/s> n
  2242. name> remote
  2243. Type of storage to configure.
  2244. Choose a number from below, or type in your own value
  2245. 1 / Amazon Drive
  2246. \ "amazon cloud drive"
  2247. 2 / Amazon S3 (also Dreamhost, Ceph)
  2248. \ "s3"
  2249. 3 / Backblaze B2
  2250. \ "b2"
  2251. 4 / Dropbox
  2252. \ "dropbox"
  2253. 5 / Google Cloud Storage (this is not Google Drive)
  2254. \ "google cloud storage"
  2255. 6 / Google Drive
  2256. \ "drive"
  2257. 7 / Hubic
  2258. \ "hubic"
  2259. 8 / Local Disk
  2260. \ "local"
  2261. 9 / Microsoft OneDrive
  2262. \ "onedrive"
  2263. 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
  2264. \ "swift"
  2265. 11 / Yandex Disk
  2266. \ "yandex"
  2267. Storage> 9
  2268. Microsoft App Client Id - leave blank normally.
  2269. client_id>
  2270. Microsoft App Client Secret - leave blank normally.
  2271. client_secret>
  2272. Remote config
  2273. Use auto config?
  2274. * Say Y if not sure
  2275. * Say N if you are working on a remote or headless machine
  2276. y) Yes
  2277. n) No
  2278. y/n> y
  2279. If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
  2280. Log in and authorize rclone for access
  2281. Waiting for code...
  2282. Got code
  2283. --------------------
  2284. [remote]
  2285. client_id =
  2286. client_secret =
  2287. token = {"access_token":"XXXXXX"}
  2288. --------------------
  2289. y) Yes this is OK
  2290. e) Edit this remote
  2291. d) Delete this remote
  2292. y/e/d> y
  2293. See the remote setup docs for how to set it up on a machine with no Internet browser available.
  2294.  
Note that rclone runs a webserver on your local machine to collect the token as returned from Microsoft. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.
  2296.  
  2297. Once configured you can then use rclone like this,
  2298.  
  2299. List directories in top level of your One Drive
  2300.  
  2301. rclone lsd remote:
  2302. List all the files in your One Drive
  2303.  
  2304. rclone ls remote:
To copy a local directory to a One Drive directory called backup
  2306.  
  2307. rclone copy /home/source remote:backup
  2308. Modified time and hashes
  2309.  
  2310. One Drive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
  2311.  
One Drive supports SHA1 type hashes, so you can use the --checksum flag.
  2313.  
  2314. Deleting files
  2315.  
  2316. Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the One Drive website.
  2317.  
  2318. Specific options
  2319.  
  2320. Here are the command line options specific to this cloud storage system.
  2321.  
  2322. --onedrive-chunk-size=SIZE
  2323.  
Above this size files will be chunked - must be a multiple of 320k. The default is 10MB. Note that the chunks will be buffered into memory.
  2325.  
  2326. --onedrive-upload-cutoff=SIZE
  2327.  
  2328. Cutoff for switching to chunked upload - must be <= 100MB. The default is 10MB.
  2329.  
  2330. Limitations
  2331.  
  2332. Note that One Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
  2333.  
  2334. Rclone only supports your default One Drive, and doesn't work with One Drive for business. Both these issues may be fixed at some point depending on user demand!
  2335.  
There are quite a few characters that can't be in One Drive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a ? in its name it will be mapped to the identical looking fullwidth character ？ instead.
  2337.  
  2338. Hubic
  2339.  
Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, eg remote:container/path/to/dir.
  2343.  
  2344. The initial setup for Hubic involves getting a token from Hubic which you need to do in your browser. rclone config walks you through it.
  2345.  
  2346. Here is an example of how to make a remote called remote. First run:
  2347.  
  2348. rclone config
  2349. This will guide you through an interactive setup process:
  2350.  
  2351. n) New remote
  2352. s) Set configuration password
  2353. n/s> n
  2354. name> remote
  2355. Type of storage to configure.
  2356. Choose a number from below, or type in your own value
  2357. 1 / Amazon Drive
  2358. \ "amazon cloud drive"
  2359. 2 / Amazon S3 (also Dreamhost, Ceph)
  2360. \ "s3"
  2361. 3 / Backblaze B2
  2362. \ "b2"
  2363. 4 / Dropbox
  2364. \ "dropbox"
  2365. 5 / Google Cloud Storage (this is not Google Drive)
  2366. \ "google cloud storage"
  2367. 6 / Google Drive
  2368. \ "drive"
  2369. 7 / Hubic
  2370. \ "hubic"
  2371. 8 / Local Disk
  2372. \ "local"
  2373. 9 / Microsoft OneDrive
  2374. \ "onedrive"
  2375. 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
  2376. \ "swift"
  2377. 11 / Yandex Disk
  2378. \ "yandex"
  2379. Storage> 7
  2380. Hubic Client Id - leave blank normally.
  2381. client_id>
  2382. Hubic Client Secret - leave blank normally.
  2383. client_secret>
  2384. Remote config
  2385. Use auto config?
  2386. * Say Y if not sure
  2387. * Say N if you are working on a remote or headless machine
  2388. y) Yes
  2389. n) No
  2390. y/n> y
  2391. If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
  2392. Log in and authorize rclone for access
  2393. Waiting for code...
  2394. Got code
  2395. --------------------
  2396. [remote]
  2397. client_id =
  2398. client_secret =
  2399. token = {"access_token":"XXXXXX"}
  2400. --------------------
  2401. y) Yes this is OK
  2402. e) Edit this remote
  2403. d) Delete this remote
  2404. y/e/d> y
  2405. See the remote setup docs for how to set it up on a machine with no Internet browser available.
  2406.  
Note that rclone runs a webserver on your local machine to collect the token as returned from Hubic. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.
  2408.  
  2409. Once configured you can then use rclone like this,
  2410.  
  2411. List containers in the top level of your Hubic
  2412.  
  2413. rclone lsd remote:
  2414. List all the files in your Hubic
  2415.  
  2416. rclone ls remote:
To copy a local directory to a Hubic directory called backup
  2418.  
  2419. rclone copy /home/source remote:backup
  2420. If you want the directory to be visible in the official Hubic browser, you need to copy your files to the default directory
  2421.  
  2422. rclone copy /home/source remote:default/backup
  2423. Modified time
  2424.  
The modified time is stored as metadata on the object as X-Object-Meta-Mtime, as a floating point number of seconds since the epoch, accurate to 1 ns.

This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
  2428.  
Note that Hubic wraps the Swift backend, so most of its properties are the same.
  2430.  
  2431. Limitations
  2432.  
  2433. This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.
  2434.  
  2435. The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.
  2436.  
  2437. Backblaze B2
  2438.  
  2439. B2 is Backblaze's cloud storage system.
  2440.  
  2441. Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.
  2442.  
  2443. Here is an example of making a b2 configuration. First run
  2444.  
  2445. rclone config
  2446. This will guide you through an interactive setup process. You will need your account number (a short hex number) and key (a long hex number) which you can get from the b2 control panel.
  2447.  
  2448. No remotes found - make a new one
  2449. n) New remote
  2450. q) Quit config
  2451. n/q> n
  2452. name> remote
  2453. Type of storage to configure.
  2454. Choose a number from below, or type in your own value
  2455. 1 / Amazon Drive
  2456. \ "amazon cloud drive"
  2457. 2 / Amazon S3 (also Dreamhost, Ceph)
  2458. \ "s3"
  2459. 3 / Backblaze B2
  2460. \ "b2"
  2461. 4 / Dropbox
  2462. \ "dropbox"
  2463. 5 / Google Cloud Storage (this is not Google Drive)
  2464. \ "google cloud storage"
  2465. 6 / Google Drive
  2466. \ "drive"
  2467. 7 / Hubic
  2468. \ "hubic"
  2469. 8 / Local Disk
  2470. \ "local"
  2471. 9 / Microsoft OneDrive
  2472. \ "onedrive"
  2473. 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
  2474. \ "swift"
  2475. 11 / Yandex Disk
  2476. \ "yandex"
  2477. Storage> 3
  2478. Account ID
  2479. account> 123456789abc
  2480. Application Key
  2481. key> 0123456789abcdef0123456789abcdef0123456789
  2482. Endpoint for the service - leave blank normally.
  2483. endpoint>
  2484. Remote config
  2485. --------------------
  2486. [remote]
  2487. account = 123456789abc
  2488. key = 0123456789abcdef0123456789abcdef0123456789
  2489. endpoint =
  2490. --------------------
  2491. y) Yes this is OK
  2492. e) Edit this remote
  2493. d) Delete this remote
  2494. y/e/d> y
  2495. This remote is called remote and can now be used like this
  2496.  
  2497. See all buckets
  2498.  
  2499. rclone lsd remote:
  2500. Make a new bucket
  2501.  
  2502. rclone mkdir remote:bucket
  2503. List the contents of a bucket
  2504.  
  2505. rclone ls remote:bucket
  2506. Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.
  2507.  
  2508. rclone sync /home/local/directory remote:bucket
  2509. Modified time
  2510.  
  2511. The modified time is stored as metadata on the object as X-Bz-Info-src_last_modified_millis as milliseconds since 1970-01-01 in the Backblaze standard. Other tools should be able to use this as a modified time.
  2512.  
  2513. Modified times are used in syncing and are fully supported except in the case of updating a modification time on an existing object. In this case the object will be uploaded again as B2 doesn't have an API method to set the modification time independent of doing an upload.
  2514.  
  2515. SHA1 checksums
  2516.  
  2517. The SHA1 checksums of the files are checked on upload and download and will be used in the syncing process.
  2518.  
  2519. Large files which are uploaded in chunks will store their SHA1 on the object as X-Bz-Info-large_file_sha1 as recommended by Backblaze.
  2520.  
  2521. Transfers
  2522.  
Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD equipped laptop the optimum setting is about --transfers 32 though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of --transfers 4 is definitely too low for Backblaze B2 though.
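For example, a sketch of a higher-concurrency sync to B2 (paths hypothetical):

rclone sync --transfers 32 /home/local/directory remote:bucket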
  2524.  
  2525. Note that uploading big files (bigger than 200 MB by default) will use a 96 MB RAM buffer by default. There can be at most --transfers of these in use at any moment, so this sets the upper limit on the memory used.
  2526.  
  2527. Versions
  2528.  
When rclone uploads a changed file it creates a new version of it. Likewise when you delete a file, the old version will still be available.
  2530.  
  2531. Old versions of files are visible using the --b2-versions flag.
  2532.  
  2533. If you wish to remove all the old versions then you can use the rclone cleanup remote:bucket command which will delete all the old versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, eg rclone cleanup remote:bucket/path/to/stuff.
  2534.  
  2535. When you purge a bucket, the current and the old versions will be deleted then the bucket will be deleted.
  2536.  
  2537. However delete will cause the current versions of the files to become hidden old versions.
  2538.  
Here is a session showing the listing and retrieval of an old version, followed by a cleanup of the old versions.
  2540.  
  2541. Show current version and all the versions with --b2-versions flag.
  2542.  
  2543. $ rclone -q ls b2:cleanup-test
  2544. 9 one.txt
  2545.  
  2546. $ rclone -q --b2-versions ls b2:cleanup-test
  2547. 9 one.txt
  2548. 8 one-v2016-07-04-141032-000.txt
  2549. 16 one-v2016-07-04-141003-000.txt
  2550. 15 one-v2016-07-02-155621-000.txt
Retrieve an old version
  2552.  
  2553. $ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
  2554.  
  2555. $ ls -l /tmp/one-v2016-07-04-141003-000.txt
  2556. -rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt
  2557. Clean up all the old versions and show that they've gone.
  2558.  
  2559. $ rclone -q cleanup b2:cleanup-test
  2560.  
  2561. $ rclone -q ls b2:cleanup-test
  2562. 9 one.txt
  2563.  
  2564. $ rclone -q --b2-versions ls b2:cleanup-test
  2565. 9 one.txt
  2566. Specific options
  2567.  
  2568. Here are the command line options specific to this cloud storage system.
  2569.  
2570. --b2-chunk-size=SIZE
  2571.  
2572. When uploading large files, chunk the file into pieces of this size. Note that these chunks are buffered in memory and there may be a maximum of --transfers chunks in progress at once. 100,000,000 bytes is the minimum size (default 96M).
  2573.  
  2574. --b2-upload-cutoff=SIZE
  2575.  
  2576. Cutoff for switching to chunked upload (default 190.735 MiB == 200 MB). Files above this size will be uploaded in chunks of --b2-chunk-size.
  2577.  
  2578. This value should be set no larger than 4.657GiB (== 5GB) as this is the largest file size that can be uploaded.
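For example, to force chunked uploads for anything over 100 MiB while using a smaller chunk size (the paths are placeholders):

rclone copy --b2-upload-cutoff 100M --b2-chunk-size 100M /path/to/big/files remote:bucket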
  2579.  
  2580. --b2-test-mode=FLAG
  2581.  
  2582. This is for debugging purposes only.
  2583.  
  2584. Setting FLAG to one of the strings below will cause b2 to return specific errors for debugging purposes.
  2585.  
  2586. fail_some_uploads
  2587. expire_some_account_authorization_tokens
  2588. force_cap_exceeded
  2589. These will be set in the X-Bz-Test-Mode header which is documented in the b2 integrations checklist.
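For example, to exercise the upload retry code during a copy (debugging only, paths are placeholders):

rclone -v --b2-test-mode fail_some_uploads copy /path/to/files remote:bucket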
  2590.  
  2591. --b2-versions
  2592.  
  2593. When set rclone will show and act on older versions of files. For example
  2594.  
  2595. Listing without --b2-versions
  2596.  
  2597. $ rclone -q ls b2:cleanup-test
  2598. 9 one.txt
  2599. And with
  2600.  
  2601. $ rclone -q --b2-versions ls b2:cleanup-test
  2602. 9 one.txt
  2603. 8 one-v2016-07-04-141032-000.txt
  2604. 16 one-v2016-07-04-141003-000.txt
  2605. 15 one-v2016-07-02-155621-000.txt
  2606. Showing that the current version is unchanged but older versions can be seen. These have the UTC date that they were uploaded to the server to the nearest millisecond appended to them.
  2607.  
  2608. Note that when using --b2-versions no file write operations are permitted, so you can't upload files or delete them.
  2609.  
  2610. Yandex Disk
  2611.  
  2612. Yandex Disk is a cloud storage solution created by Yandex.
  2613.  
  2614. Yandex paths may be as deep as required, eg remote:directory/subdirectory.
  2615.  
  2616. Here is an example of making a yandex configuration. First run
  2617.  
  2618. rclone config
  2619. This will guide you through an interactive setup process:
  2620.  
  2621. No remotes found - make a new one
  2622. n) New remote
  2623. s) Set configuration password
  2624. n/s> n
  2625. name> remote
  2626. Type of storage to configure.
  2627. Choose a number from below, or type in your own value
  2628. 1 / Amazon Drive
  2629. \ "amazon cloud drive"
  2630. 2 / Amazon S3 (also Dreamhost, Ceph)
  2631. \ "s3"
  2632. 3 / Backblaze B2
  2633. \ "b2"
  2634. 4 / Dropbox
  2635. \ "dropbox"
  2636. 5 / Google Cloud Storage (this is not Google Drive)
  2637. \ "google cloud storage"
  2638. 6 / Google Drive
  2639. \ "drive"
  2640. 7 / Hubic
  2641. \ "hubic"
  2642. 8 / Local Disk
  2643. \ "local"
  2644. 9 / Microsoft OneDrive
  2645. \ "onedrive"
  2646. 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
  2647. \ "swift"
  2648. 11 / Yandex Disk
  2649. \ "yandex"
  2650. Storage> 11
  2651. Yandex Client Id - leave blank normally.
  2652. client_id>
  2653. Yandex Client Secret - leave blank normally.
  2654. client_secret>
  2655. Remote config
  2656. Use auto config?
  2657. * Say Y if not sure
  2658. * Say N if you are working on a remote or headless machine
  2659. y) Yes
  2660. n) No
  2661. y/n> y
  2662. If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
  2663. Log in and authorize rclone for access
  2664. Waiting for code...
  2665. Got code
  2666. --------------------
  2667. [remote]
  2668. client_id =
  2669. client_secret =
  2670. token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"}
  2671. --------------------
  2672. y) Yes this is OK
  2673. e) Edit this remote
  2674. d) Delete this remote
  2675. y/e/d> y
  2676. See the remote setup docs for how to set it up on a machine with no Internet browser available.
  2677.  
2678. Note that rclone runs a webserver on your local machine to collect the token as returned from Yandex Disk. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.
  2679.  
  2680. Once configured you can then use rclone like this,
  2681.  
  2682. See top level directories
  2683.  
  2684. rclone lsd remote:
  2685. Make a new directory
  2686.  
  2687. rclone mkdir remote:directory
  2688. List the contents of a directory
  2689.  
  2690. rclone ls remote:directory
  2691. Sync /home/local/directory to the remote path, deleting any excess files in the path.
  2692.  
  2693. rclone sync /home/local/directory remote:directory
  2694. Modified time
  2695.  
  2696. Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified in RFC3339 with nanoseconds format.
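For example, the stored value might look like this (the value shown is illustrative):

rclone_modified = 2016-08-24T10:40:00.123456789Z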
  2697.  
  2698. MD5 checksums
  2699.  
  2700. MD5 checksums are natively supported by Yandex Disk.
  2701.  
  2702. Crypt
  2703.  
  2704. The crypt remote encrypts and decrypts another remote.
  2705.  
  2706. To use it first set up the underlying remote following the config instructions for that remote. You can also use a local pathname instead of a remote which will encrypt and decrypt from that directory which might be useful for encrypting onto a USB stick for example.
  2707.  
  2708. First check your chosen remote is working - we'll call it remote:path in these docs. Note that anything inside remote:path will be encrypted and anything outside won't. This means that if you are using a bucket based remote (eg S3, B2, swift) then you should probably put the bucket in the remote s3:bucket. If you just use s3: then rclone will make encrypted bucket names too (if using file name encryption) which may or may not be what you want.
  2709.  
  2710. Now configure crypt using rclone config. We will call this one secret to differentiate it from the remote.
  2711.  
  2712. No remotes found - make a new one
  2713. n) New remote
  2714. s) Set configuration password
  2715. q) Quit config
  2716. n/s/q> n
  2717. name> secret
  2718. Type of storage to configure.
  2719. Choose a number from below, or type in your own value
  2720. 1 / Amazon Drive
  2721. \ "amazon cloud drive"
  2722. 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
  2723. \ "s3"
  2724. 3 / Backblaze B2
  2725. \ "b2"
  2726. 4 / Dropbox
  2727. \ "dropbox"
  2728. 5 / Encrypt/Decrypt a remote
  2729. \ "crypt"
  2730. 6 / Google Cloud Storage (this is not Google Drive)
  2731. \ "google cloud storage"
  2732. 7 / Google Drive
  2733. \ "drive"
  2734. 8 / Hubic
  2735. \ "hubic"
  2736. 9 / Local Disk
  2737. \ "local"
  2738. 10 / Microsoft OneDrive
  2739. \ "onedrive"
  2740. 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
  2741. \ "swift"
  2742. 12 / Yandex Disk
  2743. \ "yandex"
  2744. Storage> 5
  2745. Remote to encrypt/decrypt.
  2746. remote> remote:path
  2747. How to encrypt the filenames.
  2748. Choose a number from below, or type in your own value
  2749. 1 / Don't encrypt the file names. Adds a ".bin" extension only.
  2750. \ "off"
  2751. 2 / Encrypt the filenames see the docs for the details.
  2752. \ "standard"
  2753. filename_encryption> 2
  2754. Password or pass phrase for encryption.
  2755. y) Yes type in my own password
  2756. g) Generate random password
  2757. y/g> y
  2758. Enter the password:
  2759. password:
  2760. Confirm the password:
  2761. password:
  2762. Password or pass phrase for salt. Optional but recommended.
  2763. Should be different to the previous password.
  2764. y) Yes type in my own password
  2765. g) Generate random password
  2766. n) No leave this optional password blank
  2767. y/g/n> g
  2768. Password strength in bits.
  2769. 64 is just about memorable
  2770. 128 is secure
  2771. 1024 is the maximum
  2772. Bits> 128
  2773. Your password is: JAsJvRcgR-_veXNfy_sGmQ
  2774. Use this password?
  2775. y) Yes
  2776. n) No
  2777. y/n> y
  2778. Remote config
  2779. --------------------
  2780. [secret]
  2781. remote = remote:path
  2782. filename_encryption = standard
  2783. password = CfDxopZIXFG0Oo-ac7dPLWWOHkNJbw
  2784. password2 = HYUpfuzHJL8qnX9fOaIYijq0xnVLwyVzp3y4SF3TwYqAU6HLysk
  2785. --------------------
  2786. y) Yes this is OK
  2787. e) Edit this remote
  2788. d) Delete this remote
  2789. y/e/d> y
2790. Important The password stored in the config file is lightly obscured, so it isn't immediately obvious what it is. It is in no way secure unless you use config file encryption.
  2791.  
  2792. A long passphrase is recommended, or you can use a random one. Note that if you reconfigure rclone with the same passwords/passphrases elsewhere it will be compatible - all the secrets used are derived from those two passwords/passphrases.
  2793.  
2794. Note that rclone does not encrypt:
file length - this can be calculated within 16 bytes
modification time - used for syncing
  2795.  
  2796. Example
  2797.  
  2798. To test I made a little directory of files using "standard" file name encryption.
  2799.  
2800. plaintext/
2801. +-- file0.txt
2802. +-- file1.txt
2803. +-- subdir
2804.     +-- file2.txt
2805.     +-- file3.txt
2806.     +-- subsubdir
2807.         +-- file4.txt
  2808. Copy these to the remote and list them back
  2809.  
  2810. $ rclone -q copy plaintext secret:
  2811. $ rclone -q ls secret:
  2812. 7 file1.txt
  2813. 6 file0.txt
  2814. 8 subdir/file2.txt
  2815. 10 subdir/subsubdir/file4.txt
  2816. 9 subdir/file3.txt
  2817. Now see what that looked like when encrypted
  2818.  
  2819. $ rclone -q ls remote:path
  2820. 55 hagjclgavj2mbiqm6u6cnjjqcg
  2821. 54 v05749mltvv1tf4onltun46gls
  2822. 57 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo
  2823. 58 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc
  2824. 56 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps
  2825. Note that this retains the directory structure which means you can do this
  2826.  
  2827. $ rclone -q ls secret:subdir
  2828. 8 file2.txt
  2829. 9 file3.txt
  2830. 10 subsubdir/file4.txt
2831. If you don't use file name encryption then the remote will look like this - note the .bin extensions added to prevent the cloud provider attempting to interpret the data.
  2832.  
  2833. $ rclone -q ls remote:path
  2834. 54 file0.txt.bin
  2835. 57 subdir/file3.txt.bin
  2836. 56 subdir/file2.txt.bin
  2837. 58 subdir/subsubdir/file4.txt.bin
  2838. 55 file1.txt.bin
  2839. File name encryption modes
  2840.  
  2841. Here are some of the features of the file name encryption modes
  2842.  
2843. Off
doesn't hide file names or directory structure
allows for longer file names (~246 characters)
can use sub paths and copy single files
  2844.  
2845. Standard
file names encrypted
file names can't be as long (~156 characters)
can use sub paths and copy single files
directory structure visible
identical file names will have identical uploaded names
can use shortcuts to shorten the directory recursion
  2846.  
  2847. Cloud storage systems have various limits on file name length and total path length which you are more likely to hit using "Standard" file name encryption. If you keep your file names to below 156 characters in length then you should be OK on all providers.
  2848.  
  2849. There may be an even more secure file name encryption mode in the future which will address the long file name problem.
  2850.  
  2851. File formats
  2852.  
  2853. File encryption
  2854.  
  2855. Files are encrypted 1:1 source file to destination object. The file has a header and is divided into chunks.
  2856.  
  2857. Header
  2858.  
  2859. 8 bytes magic string RCLONE\x00\x00
  2860. 24 bytes Nonce (IV)
2861. The initial nonce is generated from the operating system's cryptographically strong random number generator. The nonce is incremented for each chunk read, making sure each nonce is unique for each block written. The chance of a nonce being re-used is minuscule. If you wrote an exabyte of data (10^18 bytes) you would have a probability of approximately 2×10^-32 of re-using a nonce.
  2862.  
  2863. Chunk
  2864.  
  2865. Each chunk will contain 64kB of data, except for the last one which may have less data. The data chunk is in standard NACL secretbox format. Secretbox uses XSalsa20 and Poly1305 to encrypt and authenticate messages.
  2866.  
  2867. Each chunk contains:
  2868.  
  2869. 16 Bytes of Poly1305 authenticator
  2870. 1 - 65536 bytes XSalsa20 encrypted data
  2871. 64k chunk size was chosen as the best performing chunk size (the authenticator takes too much time below this and the performance drops off due to cache effects above this). Note that these chunks are buffered in memory so they can't be too big.
  2872.  
  2873. This uses a 32 byte (256 bit key) key derived from the user password.
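To make the format concrete, here is a minimal Go sketch (not rclone's actual source) that writes the header and seals each chunk with NaCl secretbox, incrementing the nonce between chunks; the little-endian carry in increment is an assumption for illustration.

package main

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/nacl/secretbox"
)

const chunkSize = 64 * 1024

// increment treats the nonce as a counter, carrying between bytes.
func increment(nonce *[24]byte) {
	for i := range nonce {
		nonce[i]++
		if nonce[i] != 0 {
			return
		}
	}
}

// encrypt produces: 8 byte magic, 24 byte nonce, then sealed chunks.
func encrypt(plaintext []byte, nonce [24]byte, key *[32]byte) []byte {
	var out bytes.Buffer
	out.WriteString("RCLONE\x00\x00") // 8 bytes magic string
	out.Write(nonce[:])               // 24 bytes initial nonce (IV)
	for len(plaintext) > 0 {
		n := len(plaintext)
		if n > chunkSize {
			n = chunkSize
		}
		// Seal prepends the 16 byte Poly1305 authenticator to the
		// XSalsa20 encrypted data and appends the result to its first arg.
		out.Write(secretbox.Seal(nil, plaintext[:n], &nonce, key))
		increment(&nonce) // a fresh nonce for every chunk
		plaintext = plaintext[n:]
	}
	return out.Bytes()
}

func main() {
	var key [32]byte
	var nonce [24]byte // in rclone this comes from a crypto strong random source
	fmt.Println(len(encrypt([]byte("hello"), nonce, &key)), "bytes") // 32 + 16 + 5 = 53
}

A 5 byte input comes out as 32 + 16 + 5 = 53 bytes, consistent with the 1 byte example below (32 + 17 = 49 bytes).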
  2874.  
  2875. Examples
  2876.  
  2877. 1 byte file will encrypt to
  2878.  
  2879. 32 bytes header
  2880. 17 bytes data chunk
  2881. 49 bytes total
  2882.  
  2883. 1MB (1048576 bytes) file will encrypt to
  2884.  
  2885. 32 bytes header
  2886. 16 chunks of 65568 bytes
  2887. 1049120 bytes total (a 0.05% overhead). This is the overhead for big files.
  2888.  
  2889. Name encryption
  2890.  
  2891. File names are encrypted segment by segment - the path is broken up into / separated strings and these are encrypted individually.
  2892.  
2893. File segments are padded using PKCS#7 to a multiple of 16 bytes before encryption.
  2894.  
  2895. They are then encrypted with EME using AES with 256 bit key. EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway.
  2896.  
2897. This makes for deterministic encryption which is what we want - the same filename must encrypt to the same thing otherwise we can't find it on the cloud storage system.
  2898.  
  2899. This means that
  2900.  
  2901. filenames with the same name will encrypt the same
  2902. filenames which start the same won't have a common prefix
  2903. This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of which are derived from the user password.
  2904.  
  2905. After encryption they are written out using a modified version of standard base32 encoding as described in RFC4648. The standard encoding is modified in two ways:
  2906.  
  2907. it becomes lower case (no-one likes upper case filenames!)
  2908. we strip the padding character =
  2909. base32 is used rather than the more efficient base64 so rclone can be used on case insensitive remotes (eg Windows, Amazon Drive).
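A minimal Go sketch of that encoding step (not rclone's actual source):

package main

import (
	"encoding/base32"
	"fmt"
	"strings"
)

// encodeSegment applies the modified base32 described above to one
// encrypted path segment: standard RFC4648 base32, lower-cased, with
// the '=' padding stripped.
func encodeSegment(ciphertext []byte) string {
	s := base32.StdEncoding.EncodeToString(ciphertext)
	s = strings.TrimRight(s, "=") // strip the padding character
	return strings.ToLower(s)     // no-one likes upper case filenames
}

func main() {
	fmt.Println(encodeSegment([]byte("example ciphertext bytes")))
}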
  2910.  
  2911. Key derivation
  2912.  
2913. Rclone uses scrypt with parameters N=16384, r=8, p=1 with an optional user-supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.
  2914.  
2915. scrypt makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt.
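As an illustration, here is a minimal Go sketch of this derivation (not rclone's actual source; the order in which the 80 bytes are split into the three secrets is an assumption):

package main

import (
	"fmt"

	"golang.org/x/crypto/scrypt"
)

func main() {
	password := []byte("your password")
	salt := []byte("your password2") // rclone falls back to an internal salt if this is empty
	// N=16384, r=8, p=1, 80 bytes of output as described above.
	km, err := scrypt.Key(password, salt, 16384, 8, 1, 80)
	if err != nil {
		panic(err)
	}
	dataKey := km[:32]   // 32 byte key for the NACL secretbox file encryption
	nameKey := km[32:64] // 32 byte AES-256 key for EME filename encryption
	nameIV := km[64:]    // 16 byte IV for EME
	fmt.Println(len(dataKey), len(nameKey), len(nameIV))
}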
  2916.  
  2917. Local Filesystem
  2918.  
  2919. Local paths are specified as normal filesystem paths, eg /path/to/wherever, so
  2920.  
  2921. rclone sync /home/source /tmp/destination
  2922. Will sync /home/source to /tmp/destination
  2923.  
2924. These can be configured into the config file for consistency's sake, but it is probably easier not to.
  2925.  
  2926. Modified time
  2927.  
2928. Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.
  2929.  
  2930. Filenames
  2931.  
  2932. Filenames are expected to be encoded in UTF-8 on disk. This is the normal case for Windows and OS X.
  2933.  
2934. There is a bit more uncertainty in the Linux world, but new distributions will have UTF-8 encoded file names. If you are using an old Linux filesystem with non UTF-8 file names (eg latin1) then you can use the convmv tool to convert the filesystem to UTF-8. This tool is available in most distributions' package managers.
  2935.  
2936. If an invalid (non-UTF8) filename is read, the invalid characters will be replaced with the unicode replacement character, '�'. rclone will emit a debug message in this case (use -v to see), eg
  2937.  
  2938. Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf"
  2939. Long paths on Windows
  2940.  
  2941. Rclone handles long paths automatically, by converting all paths to long UNC paths which allows paths up to 32,767 characters.
  2942.  
  2943. This is why you will see that your paths, for instance c:\files is converted to the UNC path \\?\c:\files in the output, and \\server\share is converted to \\?\UNC\server\share.
  2944.  
  2945. However, in rare cases this may cause problems with buggy file system drivers like EncFS. To disable UNC conversion globally, add this to your .rclone.conf file:
  2946.  
  2947. [local]
  2948. nounc = true
  2949. If you want to selectively disable UNC, you can add it to a separate entry like this:
  2950.  
  2951. [nounc]
  2952. type = local
  2953. nounc = true
  2954. And use rclone like this:
  2955.  
  2956. rclone copy c:\src nounc:z:\dst
  2957.  
  2958. This will use UNC paths on c:\src but not on z:\dst. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to.
  2959.  
  2960. Changelog
  2961.  
  2962. v1.33 - 2016-08-24
  2963. New Features
  2964. Implement encryption
  2965. data encrypted in NACL secretbox format
  2966. with optional file name encryption
  2967. New commands
  2968. rclone mount - implements FUSE mounting of remotes (EXPERIMENTAL)
  2969. works on Linux, FreeBSD and OS X (need testers for the last 2!)
  2970. rclone cat - outputs remote file or files to the terminal
  2971. rclone genautocomplete - command to make a bash completion script for rclone
  2972. Editing a remote using rclone config now goes through the wizard
  2973. Compile with go 1.7 - this fixes rclone on macOS Sierra and on 386 processors
  2974. Use cobra for sub commands and docs generation
  2975. drive
  2976. Document how to make your own client_id
  2977. s3
2978. User-configurable Amazon S3 ACL (thanks Radek Šenfeld)
  2979. b2
  2980. Fix stats accounting for upload - no more jumping to 100% done
  2981. On cleanup delete hide marker if it is the current file
  2982. New B2 API endpoint (thanks Per Cederberg)
  2983. Set maximum backoff to 5 Minutes
  2984. onedrive
  2985. Fix URL escaping in file names - eg uploading files with + in them.
  2986. amazon cloud drive
  2987. Fix token expiry during large uploads
  2988. Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors
  2989. local
  2990. Fix filenames with invalid UTF-8 not being uploaded
  2991. Fix problem with some UTF-8 characters on OS X
  2992. v1.32 - 2016-07-13
  2993. Backblaze B2
2994. Fix upload of large files not in root
  2995. v1.31 - 2016-07-13
  2996. New Features
  2997. Reduce memory on sync by about 50%
  2998. Implement --no-traverse flag to stop copy traversing the destination remote.
  2999. This can be used to reduce memory usage down to the smallest possible.
  3000. Useful to copy a small number of files into a large destination folder.
  3001. Implement cleanup command for emptying trash / removing old versions of files
  3002. Currently B2 only
  3003. Single file handling improved
  3004. Now copied with --files-from
  3005. Automatically sets --no-traverse when copying a single file
  3006. Info on using installing with ansible - thanks Stefan Weichinger
  3007. Implement --no-update-modtime flag to stop rclone fixing the remote modified times.
  3008. Bug Fixes
  3009. Fix move command - stop it running for overlapping Fses - this was causing data loss.
  3010. Local
  3011. Fix incomplete hashes - this was causing problems for B2.
  3012. Amazon Drive
  3013. Rename Amazon Cloud Drive to Amazon Drive - no changes to config file needed.
  3014. Swift
  3015. Add support for non-default project domain - thanks Antonio Messina.
  3016. S3
  3017. Add instructions on how to use rclone with minio.
  3018. Add ap-northeast-2 (Seoul) and ap-south-1 (Mumbai) regions.
  3019. Skip setting the modified time for objects > 5GB as it isn't possible.
  3020. Backblaze B2
3021. Add --b2-versions flag so old versions can be listed and retrieved.
  3022. Treat 403 errors (eg cap exceeded) as fatal.
  3023. Implement cleanup command for deleting old file versions.
  3024. Make error handling compliant with B2 integrations notes.
  3025. Fix handling of token expiry.
  3026. Implement --b2-test-mode to set X-Bz-Test-Mode header.
  3027. Set cutoff for chunked upload to 200MB as per B2 guidelines.
  3028. Make upload multi-threaded.
  3029. Dropbox
  3030. Don't retry 461 errors.
  3031. v1.30 - 2016-06-18
  3032. New Features
  3033. Directory listing code reworked for more features and better error reporting (thanks to Klaus Post for help). This enables
  3034. Directory include filtering for efficiency
  3035. --max-depth parameter
  3036. Better error reporting
  3037. More to come
  3038. Retry more errors
  3039. Add --ignore-size flag - for uploading images to onedrive
  3040. Log -v output to stdout by default
  3041. Display the transfer stats in more human readable form
  3042. Make 0 size files specifiable with --max-size 0b
  3043. Add b suffix so we can specify bytes in --bwlimit, --min-size etc
  3044. Use "password:" instead of "password>" prompt - thanks Klaus Post and Leigh Klotz
  3045. Bug Fixes
  3046. Fix retry doing one too many retries
  3047. Local
  3048. Fix problems with OS X and UTF-8 characters
  3049. Amazon Drive
  3050. Check a file exists before uploading to help with 408 Conflict errors
  3051. Reauth on 401 errors - this has been causing a lot of problems
  3052. Work around spurious 403 errors
  3053. Restart directory listings on error
  3054. Google Drive
  3055. Check a file exists before uploading to help with duplicates
  3056. Fix retry of multipart uploads
  3057. Backblaze B2
  3058. Implement large file uploading
  3059. S3
3060. Add AES256 server-side encryption - thanks Justin R. Wilson
  3061. Google Cloud Storage
  3062. Make sure we don't use conflicting content types on upload
  3063. Add service account support - thanks Michal Witkowski
  3064. Swift
  3065. Add auth version parameter
  3066. Add domain option for openstack (v3 auth) - thanks Fabian Ruff
  3067. v1.29 - 2016-04-18
  3068. New Features
  3069. Implement -I, --ignore-times for unconditional upload
3070. Improve dedupe command
  3071. Now removes identical copies without asking
  3072. Now obeys --dry-run
  3073. Implement --dedupe-mode for non interactive running
3074. --dedupe-mode interactive - interactive mode, the default.
  3075. --dedupe-mode skip - removes identical files then skips anything left.
  3076. --dedupe-mode first - removes identical files then keeps the first one.
  3077. --dedupe-mode newest - removes identical files then keeps the newest one.
  3078. --dedupe-mode oldest - removes identical files then keeps the oldest one.
  3079. --dedupe-mode rename - removes identical files then renames the rest to be different.
  3080. Bug fixes
  3081. Make rclone check obey the --size-only flag.
  3082. Use "application/octet-stream" if discovered mime type is invalid.
  3083. Fix missing "quit" option when there are no remotes.
  3084. Google Drive
  3085. Increase default chunk size to 8 MB - increases upload speed of big files
  3086. Speed up directory listings and make more reliable
  3087. Add missing retries for Move and DirMove - increases reliability
  3088. Preserve mime type on file update
  3089. Backblaze B2
  3090. Enable mod time syncing
  3091. This means that B2 will now check modification times
  3092. It will upload new files to update the modification times
  3093. (there isn't an API to just set the mod time.)
  3094. If you want the old behaviour use --size-only.
  3095. Update API to new version
  3096. Fix parsing of mod time when not in metadata
  3097. Swift/Hubic
  3098. Don't return an MD5SUM for static large objects
  3099. S3
  3100. Fix uploading files bigger than 50GB
  3101. v1.28 - 2016-03-01
  3102. New Features
  3103. Configuration file encryption - thanks Klaus Post
  3104. Improve rclone config adding more help and making it easier to understand
  3105. Implement -u/--update so creation times can be used on all remotes
  3106. Implement --low-level-retries flag
  3107. Optionally disable gzip compression on downloads with --no-gzip-encoding
  3108. Bug fixes
  3109. Don't make directories if --dry-run set
  3110. Fix and document the move command
  3111. Fix redirecting stderr on unix-like OSes when using --log-file
  3112. Fix delete command to wait until all finished - fixes missing deletes.
  3113. Backblaze B2
3114. Use one upload URL per goroutine - fixes more than one upload using the same auth token
  3115. Add pacing, retries and reauthentication - fixes token expiry problems
  3116. Upload without using a temporary file from local (and remotes which support SHA1)
  3117. Fix reading metadata for all files when it shouldn't have been
  3118. Drive
  3119. Fix listing drive documents at root
  3120. Disable copy and move for Google docs
  3121. Swift
  3122. Fix uploading of chunked files with non ASCII characters
  3123. Allow setting of storage_url in the config - thanks Xavier Lucas
  3124. S3
  3125. Allow IAM role and credentials from environment variables - thanks Brian Stengaard
  3126. Allow low privilege users to use S3 (check if directory exists during Mkdir) - thanks Jakub Gedeon
  3127. Amazon Drive
  3128. Retry on more things to make directory listings more reliable
  3129. v1.27 - 2016-01-31
  3130. New Features
  3131. Easier headless configuration with rclone authorize
  3132. Add support for multiple hash types - we now check SHA1 as well as MD5 hashes.
  3133. delete command which does obey the filters (unlike purge)
  3134. dedupe command to deduplicate a remote. Useful with Google Drive.
  3135. Add --ignore-existing flag to skip all files that exist on destination.
  3136. Add --delete-before, --delete-during, --delete-after flags.
  3137. Add --memprofile flag to debug memory use.
  3138. Warn the user about files with same name but different case
3139. Make --include rules add their implicit exclude * at the end of the filter list
  3140. Deprecate compiling with go1.3
  3141. Amazon Drive
  3142. Fix download of files > 10 GB
  3143. Fix directory traversal ("Next token is expired") for large directory listings
  3144. Remove 409 conflict from error codes we will retry - stops very long pauses
  3145. Backblaze B2
  3146. SHA1 hashes now checked by rclone core
  3147. Drive
3148. Add --drive-auth-owner-only to only consider files owned by the user - thanks Björn Harrtell
  3149. Export Google documents
  3150. Dropbox
  3151. Make file exclusion error controllable with -q
  3152. Swift
  3153. Fix upload from unprivileged user.
  3154. S3
  3155. Fix updating of mod times of files with + in.
  3156. Local
  3157. Add local file system option to disable UNC on Windows.
  3158. v1.26 - 2016-01-02
  3159. New Features
  3160. Yandex storage backend - thank you Dmitry Burdeev ("dibu")
  3161. Implement Backblaze B2 storage backend
3162. Add --min-age and --max-age flags - thank you Adriano Aurélio Meirelles
  3163. Make ls/lsl/md5sum/size/check obey includes and excludes
  3164. Fixes
  3165. Fix crash in http logging
  3166. Upload releases to github too
  3167. Swift
  3168. Fix sync for chunked files
  3169. One Drive
  3170. Re-enable server side copy
  3171. Don't mask HTTP error codes with JSON decode error
  3172. S3
  3173. Fix corrupting Content-Type on mod time update (thanks Joseph Spurrier)
  3174. v1.25 - 2015-11-14
  3175. New features
  3176. Implement Hubic storage system
  3177. Fixes
  3178. Fix deletion of some excluded files without --delete-excluded
  3179. This could have deleted files unexpectedly on sync
  3180. Always check first with --dry-run!
  3181. Swift
  3182. Stop SetModTime losing metadata (eg X-Object-Manifest)
  3183. This could have caused data loss for files > 5GB in size
  3184. Use ContentType from Object to avoid lookups in listings
  3185. One Drive
  3186. disable server side copy as it seems to be broken at Microsoft
  3187. v1.24 - 2015-11-07
  3188. New features
  3189. Add support for Microsoft One Drive
  3190. Add --no-check-certificate option to disable server certificate verification
  3191. Add async readahead buffer for faster transfer of big files
  3192. Fixes
  3193. Allow spaces in remotes and check remote names for validity at creation time
  3194. Allow '&' and disallow ':' in Windows filenames.
  3195. Swift
  3196. Ignore directory marker objects where appropriate - allows working with Hubic
  3197. Don't delete the container if fs wasn't at root
  3198. S3
  3199. Don't delete the bucket if fs wasn't at root
  3200. Google Cloud Storage
  3201. Don't delete the bucket if fs wasn't at root
  3202. v1.23 - 2015-10-03
  3203. New features
  3204. Implement rclone size for measuring remotes
  3205. Fixes
  3206. Fix headless config for drive and gcs
  3207. Tell the user they should try again if the webserver method failed
  3208. Improve output of --dump-headers
  3209. S3
  3210. Allow anonymous access to public buckets
  3211. Swift
  3212. Stop chunked operations logging "Failed to read info: Object Not Found"
  3213. Use Content-Length on uploads for extra reliability
  3214. v1.22 - 2015-09-28
  3215. Implement rsync like include and exclude flags
  3216. swift
  3217. Support files > 5GB - thanks Sergey Tolmachev
  3218. v1.21 - 2015-09-22
  3219. New features
  3220. Display individual transfer progress
  3221. Make lsl output times in localtime
  3222. Fixes
  3223. Fix allowing user to override credentials again in Drive, GCS and ACD
  3224. Amazon Drive
  3225. Implement compliant pacing scheme
  3226. Google Drive
  3227. Make directory reads concurrent for increased speed.
  3228. v1.20 - 2015-09-15
  3229. New features
  3230. Amazon Drive support
  3231. Oauth support redone - fix many bugs and improve usability
  3232. Use "golang.org/x/oauth2" as oauth libary of choice
  3233. Improve oauth usability for smoother initial signup
  3234. drive, googlecloudstorage: optionally use auto config for the oauth token
  3235. Implement --dump-headers and --dump-bodies debug flags
  3236. Show multiple matched commands if abbreviation too short
  3237. Implement server side move where possible
  3238. local
  3239. Always use UNC paths internally on Windows - fixes a lot of bugs
  3240. dropbox
  3241. force use of our custom transport which makes timeouts work
  3242. Thanks to Klaus Post for lots of help with this release
  3243. v1.19 - 2015-08-28
  3244. New features
  3245. Server side copies for s3/swift/drive/dropbox/gcs
  3246. Move command - uses server side copies if it can
  3247. Implement --retries flag - tries 3 times by default
  3248. Build for plan9/amd64 and solaris/amd64 too
  3249. Fixes
  3250. Make a current version download with a fixed URL for scripting
  3251. Ignore rmdir in limited fs rather than throwing error
  3252. dropbox
  3253. Increase chunk size to improve upload speeds massively
  3254. Issue an error message when trying to upload bad file name
  3255. v1.18 - 2015-08-17
  3256. drive
  3257. Add --drive-use-trash flag so rclone trashes instead of deletes
  3258. Add "Forbidden to download" message for files with no downloadURL
  3259. dropbox
  3260. Remove datastore
  3261. This was deprecated and it caused a lot of problems
  3262. Modification times and MD5SUMs no longer stored
  3263. Fix uploading files > 2GB
  3264. s3
  3265. use official AWS SDK from github.com/aws/aws-sdk-go
  3266. NB will most likely require you to delete and recreate remote
  3267. enable multipart upload which enables files > 5GB
  3268. tested with Ceph / RadosGW / S3 emulation
  3269. many thanks to Sam Liston and Brian Haymore at the Utah Center for High Performance Computing for a Ceph test account
  3270. misc
  3271. Show errors when reading the config file
  3272. Do not print stats in quiet mode - thanks Leonid Shalupov
  3273. Add FAQ
  3274. Fix created directories not obeying umask
  3275. Linux installation instructions - thanks Shimon Doodkin
  3276. v1.17 - 2015-06-14
  3277. dropbox: fix case insensitivity issues - thanks Leonid Shalupov
  3278. v1.16 - 2015-06-09
  3279. Fix uploading big files which was causing timeouts or panics
  3280. Don't check md5sum after download with --size-only
  3281. v1.15 - 2015-06-06
  3282. Add --checksum flag to only discard transfers by MD5SUM - thanks Alex Couper
  3283. Implement --size-only flag to sync on size not checksum & modtime
  3284. Expand docs and remove duplicated information
  3285. Document rclone's limitations with directories
  3286. dropbox: update docs about case insensitivity
  3287. v1.14 - 2015-05-21
  3288. local: fix encoding of non utf-8 file names - fixes a duplicate file problem
  3289. drive: docs about rate limiting
  3290. google cloud storage: Fix compile after API change in "google.golang.org/api/storage/v1"
  3291. v1.13 - 2015-05-10
  3292. Revise documentation (especially sync)
  3293. Implement --timeout and --conntimeout
  3294. s3: ignore etags from multipart uploads which aren't md5sums
  3295. v1.12 - 2015-03-15
  3296. drive: Use chunked upload for files above a certain size
  3297. drive: add --drive-chunk-size and --drive-upload-cutoff parameters
  3298. drive: switch to insert from update when a failed copy deletes the upload
  3299. core: Log duplicate files if they are detected
  3300. v1.11 - 2015-03-04
  3301. swift: add region parameter
  3302. drive: fix crash on failed to update remote mtime
  3303. In remote paths, change native directory separators to /
  3304. Add synchronization to ls/lsl/lsd output to stop corruptions
  3305. Ensure all stats/log messages to go stderr
  3306. Add --log-file flag to log everything (including panics) to file
  3307. Make it possible to disable stats printing with --stats=0
  3308. Implement --bwlimit to limit data transfer bandwidth
  3309. v1.10 - 2015-02-12
  3310. s3: list an unlimited number of items
  3311. Fix getting stuck in the configurator
  3312. v1.09 - 2015-02-07
  3313. windows: Stop drive letters (eg C:) getting mixed up with remotes (eg drive:)
  3314. local: Fix directory separators on Windows
  3315. drive: fix rate limit exceeded errors
  3316. v1.08 - 2015-02-04
  3317. drive: fix subdirectory listing to not list entire drive
  3318. drive: Fix SetModTime
  3319. dropbox: adapt code to recent library changes
  3320. v1.07 - 2014-12-23
  3321. google cloud storage: fix memory leak
  3322. v1.06 - 2014-12-12
  3323. Fix "Couldn't find home directory" on OSX
  3324. swift: Add tenant parameter
  3325. Use new location of Google API packages
  3326. v1.05 - 2014-08-09
  3327. Improved tests and consequently lots of minor fixes
  3328. core: Fix race detected by go race detector
  3329. core: Fixes after running errcheck
  3330. drive: reset root directory on Rmdir and Purge
  3331. fs: Document that Purger returns error on empty directory, test and fix
  3332. google cloud storage: fix ListDir on subdirectory
  3333. google cloud storage: re-read metadata in SetModTime
  3334. s3: make reading metadata more reliable to work around eventual consistency problems
  3335. s3: strip trailing / from ListDir()
  3336. swift: return directories without / in ListDir
  3337. v1.04 - 2014-07-21
  3338. google cloud storage: Fix crash on Update
  3339. v1.03 - 2014-07-20
  3340. swift, s3, dropbox: fix updated files being marked as corrupted
  3341. Make compile with go 1.1 again
  3342. v1.02 - 2014-07-19
  3343. Implement Dropbox remote
  3344. Implement Google Cloud Storage remote
  3345. Verify Md5sums and Sizes after copies
  3346. Remove times from "ls" command - lists sizes only
  3347. Add add "lsl" - lists times and sizes
  3348. Add "md5sum" command
  3349. v1.01 - 2014-07-04
  3350. drive: fix transfer of big files using up lots of memory
  3351. v1.00 - 2014-07-03
  3352. drive: fix whole second dates
  3353. v0.99 - 2014-06-26
  3354. Fix --dry-run not working
  3355. Make compatible with go 1.1
  3356. v0.98 - 2014-05-30
  3357. s3: Treat missing Content-Length as 0 for some ceph installations
  3358. rclonetest: add file with a space in
  3359. v0.97 - 2014-05-05
  3360. Implement copying of single files
  3361. s3 & swift: support paths inside containers/buckets
  3362. v0.96 - 2014-04-24
  3363. drive: Fix multiple files of same name being created
  3364. drive: Use o.Update and fs.Put to optimise transfers
  3365. Add version number, -V and --version
  3366. v0.95 - 2014-03-28
  3367. rclone.org: website, docs and graphics
  3368. drive: fix path parsing
  3369. v0.94 - 2014-03-27
  3370. Change remote format one last time
  3371. GNU style flags
  3372. v0.93 - 2014-03-16
  3373. drive: store token in config file
  3374. cross compile other versions
  3375. set strict permissions on config file
  3376. v0.92 - 2014-03-15
  3377. Config fixes and --config option
  3378. v0.91 - 2014-03-15
  3379. Make config file
  3380. v0.90 - 2013-06-27
  3381. Project named rclone
  3382. v0.00 - 2012-11-18
  3383. Project started
  3384. Bugs and Limitations
  3385.  
  3386. Empty directories are left behind / not created
  3387.  
  3388. With remotes that have a concept of directory, eg Local and Drive, empty directories may be left behind, or not created when one was expected.
  3389.  
  3390. This is because rclone doesn't have a concept of a directory - it only works on objects. Most of the object storage systems can't actually store a directory so there is nowhere for rclone to store anything about directories.
  3391.  
3392. You can work around this to some extent with the purge command, which will delete everything under the path, including empty directories.
  3393.  
  3394. This may be fixed at some point in Issue #100
  3395.  
  3396. Directory timestamps aren't preserved
  3397.  
  3398. For the same reason as the above, rclone doesn't have a concept of a directory - it only works on objects, therefore it can't preserve the timestamps of directories.
  3399.  
  3400. Frequently Asked Questions
  3401.  
  3402. Do all cloud storage systems support all rclone commands
  3403.  
  3404. Yes they do. All the rclone commands (eg sync, copy etc) will work on all the remote storage systems.
  3405.  
  3406. Can I copy the config from one machine to another
  3407.  
  3408. Sure! Rclone stores all of its config in a single file. If you want to find this file, the simplest way is to run rclone -h and look at the help for the --config flag which will tell you where it is.
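For example, to use a copied config from a non-default location (the path is a placeholder):

rclone --config /path/to/.rclone.conf lsd remote: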
  3409.  
  3410. See the remote setup docs for more info.
  3411.  
  3412. How do I configure rclone on a remote / headless box with no browser?
  3413.  
  3414. This has now been documented in its own remote setup page.
  3415.  
  3416. Can rclone sync directly from drive to s3
  3417.  
  3418. Rclone can sync between two remote cloud storage systems just fine.
  3419.  
  3420. Note that it effectively downloads the file and uploads it again, so the node running rclone would need to have lots of bandwidth.
  3421.  
  3422. The syncs would be incremental (on a file by file basis).
  3423.  
  3424. Eg
  3425.  
  3426. rclone sync drive:Folder s3:bucket
  3427. Using rclone from multiple locations at the same time
  3428.  
3429. You can use rclone from multiple places at the same time if you choose a different subdirectory for the output, eg
  3430.  
  3431. Server A> rclone sync /tmp/whatever remote:ServerA
  3432. Server B> rclone sync /tmp/whatever remote:ServerB
3433. If you sync to the same directory then you should use rclone copy otherwise the two rclones may delete each other's files, eg
  3434.  
  3435. Server A> rclone copy /tmp/whatever remote:Backup
  3436. Server B> rclone copy /tmp/whatever remote:Backup
  3437. The file names you upload from Server A and Server B should be different in this case, otherwise some file systems (eg Drive) may make duplicates.
  3438.  
  3439. Why doesn't rclone support partial transfers / binary diffs like rsync?
  3440.  
  3441. Rclone stores each file you transfer as a native object on the remote cloud storage system. This means that you can see the files you upload as expected using alternative access methods (eg using the Google Drive web interface). There is a 1:1 mapping between files on your hard disk and objects created in the cloud storage system.
  3442.  
  3443. Cloud storage systems (at least none I've come across yet) don't support partially uploading an object. You can't take an existing object, and change some bytes in the middle of it.
  3444.  
  3445. It would be possible to make a sync system which stored binary diffs instead of whole objects like rclone does, but that would break the 1:1 mapping of files on your hard disk to objects in the remote cloud storage system.
  3446.  
  3447. All the cloud storage systems support partial downloads of content, so it would be possible to make partial downloads work. However to make this work efficiently this would require storing a significant amount of metadata, which breaks the desired 1:1 mapping of files to objects.
  3448.  
  3449. Can rclone do bi-directional sync?
  3450.  
  3451. No, not at present. rclone only does uni-directional sync from A -> B. It may do in the future though since it has all the primitives - it just requires writing the algorithm to do it.
  3452.  
  3453. Can I use rclone with an HTTP proxy?
  3454.  
  3455. Yes. rclone will use the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY, similar to cURL and other programs.
  3456.  
  3457. HTTPS_PROXY takes precedence over HTTP_PROXY for https requests.
  3458.  
  3459. The environment values may be either a complete URL or a "host[:port]", in which case the "http" scheme is assumed.
  3460.  
  3461. The NO_PROXY allows you to disable the proxy for specific hosts. Hosts must be comma separated, and can contain domains or parts. For instance "foo.com" also matches "bar.foo.com".
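For example (the proxy URL is a placeholder):

export HTTPS_PROXY=http://proxy.example.com:8080
export NO_PROXY=localhost,intranet.example.com
rclone lsd remote: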
  3462.  
  3463. Rclone gives x509: failed to load system roots and no roots provided error
  3464.  
3465. This means that rclone can't find the SSL root certificates. Likely you are running rclone on a NAS with a cut-down Linux OS, or possibly on Solaris.
  3466.  
  3467. Rclone (via the Go runtime) tries to load the root certificates from these places on Linux.
  3468.  
  3469. "/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo etc.
  3470. "/etc/pki/tls/certs/ca-bundle.crt", // Fedora/RHEL
  3471. "/etc/ssl/ca-bundle.pem", // OpenSUSE
  3472. "/etc/pki/tls/cacert.pem", // OpenELEC
  3473. So doing something like this should fix the problem. It also sets the time which is important for SSL to work properly.
  3474.  
  3475. mkdir -p /etc/ssl/certs/
  3476. curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
  3477. ntpclient -s -h pool.ntp.org
  3478. Note that you may need to add the --insecure option to the curl command line if it doesn't work without.
  3479.  
  3480. curl --insecure -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
  3481. Rclone gives Failed to load config file: function not implemented error
  3482.  
3483. Likely this means that you are running rclone on a Linux kernel version not supported by the Go runtime, ie earlier than version 2.6.23.
  3484.  
  3485. See the system requirements section in the go install docs for full details.
  3486.  
  3487. All my uploaded docx/xlsx/pptx files appear as archive/zip
  3488.  
3489. This is caused by uploading these files from a Windows computer which hasn't got the Microsoft Office suite installed. The easiest way to fix this is to install the Word viewer and the Microsoft Office Compatibility Pack for Word, Excel, and PowerPoint 2007 and later versions' file formats.
  3490.  
  3491. License
  3492.  
3493. This is free software under the terms of the MIT license (check the COPYING file included with the source code).
  3494.  
  3495. Copyright (C) 2012 by Nick Craig-Wood http://www.craig-wood.com/nick/
  3496.  
  3497. Permission is hereby granted, free of charge, to any person obtaining a copy
  3498. of this software and associated documentation files (the "Software"), to deal
  3499. in the Software without restriction, including without limitation the rights
  3500. to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
  3501. copies of the Software, and to permit persons to whom the Software is
  3502. furnished to do so, subject to the following conditions:
  3503.  
  3504. The above copyright notice and this permission notice shall be included in
  3505. all copies or substantial portions of the Software.
  3506.  
  3507. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
  3508. IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
  3509. FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
  3510. AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
  3511. LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
  3512. OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
  3513. THE SOFTWARE.
  3514. Authors
  3515.  
  3516. Nick Craig-Wood nick@craig-wood.com
  3517. Contributors
  3518.  
  3519. Alex Couper amcouper@gmail.com
  3520. Leonid Shalupov leonid@shalupov.com
  3521. Shimon Doodkin helpmepro1@gmail.com
  3522. Colin Nicholson colin@colinn.com
  3523. Klaus Post klauspost@gmail.com
  3524. Sergey Tolmachev tolsi.ru@gmail.com
3525. Adriano Aurélio Meirelles adriano@atinge.com
  3526. C. Bess cbess@users.noreply.github.com
  3527. Dmitry Burdeev dibu28@gmail.com
  3528. Joseph Spurrier github@josephspurrier.com
3529. Björn Harrtell bjorn@wololo.org
  3530. Xavier Lucas xavier.lucas@corp.ovh.com
  3531. Werner Beroux werner@beroux.com
  3532. Brian Stengaard brian@stengaard.eu
  3533. Jakub Gedeon jgedeon@sofi.com
  3534. Jim Tittsler jwt@onjapan.net
  3535. Michal Witkowski michal@improbable.io
  3536. Fabian Ruff fabian.ruff@sap.com
  3537. Leigh Klotz klotz@quixey.com
  3538. Romain Lapray lapray.romain@gmail.com
  3539. Justin R. Wilson jrw972@gmail.com
  3540. Antonio Messina antonio.s.messina@gmail.com
  3541. Stefan G. Weichinger office@oops.co.at
  3542. Per Cederberg cederberg@gmail.com
3543. Radek Šenfeld rush@logic.cz
  3544. Contact the rclone project
  3545.  
  3546. The project website is at:
  3547.  
  3548. https://github.com/ncw/rclone
  3549. There you can file bug reports, ask for help or contribute pull requests.
  3550.  
  3551. See also
  3552.  
  3553. Google+ page for general comments
  3554. Or email Nick Craig-Wood