It's possible to find probable duplicate files with the S3 CLI based on size and tags - I was working on a script to do just that but haven't finished it yet. Alternatively, if you want an exact mirror of your machine, you can use the --delete flag, which removes files from the bucket that no longer exist in the source.
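Roughly, both ideas look something like this (the bucket name and local paths are placeholders, and the size-matching pass only flags candidates - equal size alone doesn't prove two objects are duplicates):

    # Placeholder bucket name; adjust for your own setup.
    BUCKET=my-backup-bucket

    # Exact mirror: --delete removes objects that no longer exist locally.
    aws s3 sync "$HOME/Documents" "s3://$BUCKET/documents" --delete

    # Probable duplicates by size: list (size, key) pairs, sort by size,
    # and print any run of objects sharing a size. Verify candidates
    # (e.g. by checksum) before deleting anything.
    aws s3api list-objects-v2 --bucket "$BUCKET" \
      --query 'Contents[].[Size, Key]' --output text |
      sort -n |
      awk 'NR > 1 && prev == $1 { if (!dup) print prevline; print; dup = 1; next }
           { prev = $1; prevline = $0; dup = 0 }'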
I agree this is not the most optimized solution, but it works quite well for me and is easily extensible with other scripts and S3 CLI commands. Theoretically if Borgbackup or Duplicity are backing up to S3 they're using all the same commands as the S3 CLI/SDK.
> Theoretically if Borgbackup or Duplicity are backing up to S3 they're using all the same commands as the S3 CLI/SDK.
They are not. Both Borg and Duplicity pack files into compressed, encrypted archives before uploading them to S3; "s3 sync" literally just uploads each file as an object with no additional processing.
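For comparison, here's roughly what a Duplicity run against S3 looks like (bucket name, credentials, and passphrase are placeholders, and the exact backend URL scheme varies between Duplicity versions):

    # Sketch only: credentials and passphrase are placeholders, and the
    # URL scheme (s3:// vs boto3+s3://) depends on your Duplicity version.
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    export PASSPHRASE='correct horse battery staple'   # used to GPG-encrypt the volumes

    # Duplicity packs changed files into compressed, GPG-encrypted volumes
    # and uploads those; the bucket ends up holding opaque archive objects,
    # not one object per source file.
    duplicity "$HOME/Documents" boto3+s3://my-backup-bucket/documents

    # Restoring downloads the volumes and reassembles the original files.
    duplicity restore boto3+s3://my-backup-bucket/documents "$HOME/restored-documents"

The point is that what lands in the bucket is a set of encrypted volumes plus manifest and signature files, so incremental diffs and encryption happen client-side instead of being bolted on with extra scripts.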
If I have to choose between hacking together a bunch of shell scripts to do my deduplicated, end-to-end encrypted backups and using a popular, well-tested, off-the-shelf open source solution, I know which one I'm picking!
I agree this is not the most optimized solution, but it works quite well for me and is easily extensible with other scripts and S3 CLI commands. Theoretically if Borgbackup or Duplicity are backing up to S3 they're using all the same commands as the S3 CLI/SDK.
Besides, shell scripting is fun!