I had a really interesting journey today with a thorny little challenge: deleting all the files in an S3 bucket with tons of nested files.
The bucket path (`s3://buffer-data/emr/logs/`) contained log files created by ElasticMapReduce jobs that ran every day over a couple of years (from early 2015 to early 2018).

Each EMR job would run hourly every day, firing up a cluster of machines, and each machine would output its logs.
That resulted in thousands of nested paths (one for each job), each containing thousands of other files.
I estimated that the total number of nested files would be between 5 and 10 million.

I had to estimate this number by looking at sample counts of some of the nested directories, because getting the true count would mean recursing through the whole S3 tree, which was just too slow. This is also exactly why it was challenging to delete all the files.

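To give an idea of the kind of sampling I mean, `aws s3 ls` has a `--summarize` flag that reports a total object count for a prefix; something like the sketch below (using the job directory that shows up as an example later) gives a per-job count to extrapolate from:

```
# Count the objects under one sample job directory and extrapolate from there.
# (df-Y43SNR3SQOJ4 is just the example directory mentioned further down.)
aws s3 ls s3://buffer-data/emr/dp-logs/df-Y43SNR3SQOJ4/ --recursive --summarize | tail -n 2
```
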
Deleting all the files under an S3 prefix like this is pretty challenging, since S3 doesn't really work like a true file system.
What we think of as a file's parent 'directory' in S3 is basically just a prefix that's associated with that stored object.

The parent 'directory' has no knowledge of the files it 'contains', so you can't just delete the parent directory and clean up all the files within it.

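One way to see this flat structure is through the lower-level `s3api` interface, which only deals in buckets, keys and prefixes. A listing call like this sketch (purely illustrative, not part of the cleanup itself) returns plain object keys that merely share the `emr/logs/` prefix; there is no directory object anywhere to delete:

```
# List a few object keys under the prefix; note there is no 'directory'
# entry, just keys that happen to start with emr/logs/.
aws s3api list-objects-v2 \
  --bucket buffer-data \
  --prefix emr/logs/ \
  --max-items 3 \
  --query 'Contents[].Key'
```
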
To delete all the files in an S3 'directory', you can use the `aws` command line with the `--recursive` flag:

`aws s3 rm --recursive s3://buffer-data/emr/logs`

When I tried running this command on my bucket, I left it running for over 24 hours, only to find that it had deleted just a fraction of the data.

The problem was that the `aws` command would only delete at most 1,000 objects at a time, and that it all happens in sequence. I didn't know exactly how long it would take to finish, since I couldn't even accurately tell how many log files there were, but I knew it would take days, and I couldn't wait that long.

So I had to find another way. After some digging in the documentation, it didn't seem like there was any way to force the `aws` command to execute in parallel, but luckily the shell has us covered.

To hack together a way to delete nested files faster, I used a combination of the `aws s3 ls` command and `xargs` (with a bit of `sed` to help with some text formatting).

Here is the one-liner I came up with.

`aws s3 ls s3://buffer-data/emr/dp-logs/ | grep df | sed -e 's/PRE /s3:\/\/buffer-data\/emr\/dp-logs\//g' | xargs -L1 -P 0 aws s3 rm --recursive`

Let me break that down a bit. The `aws s3 ls` command will just list everything directly under the `dp-logs` prefix (because I don't specify the `--recursive` flag, it won't recurse any further, which would also take a really long time to finish).

All the directories with logs in them started with a `df` prefix, which is why I pipe the output of the `ls` command through a `grep df` command to pick them out.

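Before wiring the destructive `rm` onto the end, it's worth running just these first two stages on their own to preview exactly which directories will be targeted:

```
# Preview the job directories that the full one-liner will delete.
aws s3 ls s3://buffer-data/emr/dp-logs/ | grep df
```
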
To actually run an `aws s3 rm` command for each one of the nested directories, I used the `xargs` command. But to get that to work, I first had to do a little cleanup of the output of the `ls` command.
The output looks like this:

```
PRE df-Y43SNR3SQOJ4/
```

Notice that it just contains the object name without the full prefix. That is easy to fix with `sed`:

`sed -e 's/PRE /s3:\/\/buffer-data\/emr\/dp-logs\//g'`

This turns the output into this:

```
s3://buffer-data/emr/dp-logs/df-Y43SNR3SQOJ4/
```

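As an aside, all those escaped slashes make the `sed` expression a bit hard on the eyes; an equivalent rewrite (just a variant, not the one I actually ran) can be done with `awk`, which prepends the full prefix to the directory name field directly:

```
# Same transformation as the sed step: turn "PRE df-XXXX/" lines into full s3 paths.
aws s3 ls s3://buffer-data/emr/dp-logs/ | grep df \
  | awk '{print "s3://buffer-data/emr/dp-logs/" $2}'
```
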
Finally, I can pipe this output into `xargs` to run an `aws s3 rm` command for each of the nested directories.
But why go through all of that? The key reason is that although `xargs` will by default run each command in sequence, you can change that by specifying the `-P` flag.

```
xargs -L1 -P 0 aws s3 rm --recursive
```

The `-L1` flag makes `xargs` run one `aws s3 rm` command per input line, and setting `-P 0` makes it run as many of those processes at once as it can.
When I ran this on my laptop at first, it brought everything else on my machine to a halt, so I fired up a beefy machine on EC2 (with 8 cores) instead, set up the `aws` command line there and let it run.

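If you do want to run something like this from a laptop, a gentler variant is to cap the parallelism at roughly the number of cores rather than leaving it unbounded; the 8 below is just a guess matching the EC2 box I used:

```
# Same pipeline, but capped at 8 parallel aws processes instead of unlimited (-P 0).
aws s3 ls s3://buffer-data/emr/dp-logs/ | grep df \
  | sed -e 's/PRE /s3:\/\/buffer-data\/emr\/dp-logs\//g' \
  | xargs -L1 -P 8 aws s3 rm --recursive
```
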
And presto! That's all I needed to do to turn a job that could have taken days into one that finished within a couple of hours instead!