The goal is to benchmark the performance and storage efficiency of four backup tools, Duplicacy, restic, Attic, and duplicity, using datasets that are publicly available.

Disclaimer

As the developer of Duplicacy, I have little first-hand experience with the other tools, beyond setting them up and running them for the first time for this performance study. It is highly possible that the configurations for the other tools are not optimal. Therefore, the results presented here should not be viewed as conclusive until they are independently confirmed by other people.

Setup

All tests were performed on a Mac mini 2012 model running macOS Sierra (10.12.3), with a 2.3 GHz Intel i7 4-core processor and 16 GB of memory.

The following table lists several important configuration parameters or algorithms that may have a significant impact on overall performance. The chunk size in Duplicacy is configurable, with the default being 4 MB; it was set to 1 MB to match that of restic. Encryption was enabled with -e repokey-blake2, which is only available in 1.1.0+.

Backing up the Linux code base

The first dataset is the Linux code base, chosen mostly because it is the largest GitHub repository that we could find and it has frequent commits (good for testing incremental backups). Its size is 1.76 GB with about 58K files, so it is a relatively small repository consisting of small files, but it represents a popular use case where a backup tool runs alongside a version control program such as git to frequently save changes made between check-ins.

To test incremental backup, a random commit from July 2016 was selected, and the entire code base was rolled back to that commit. After the initial backup was finished, other commits were chosen such that they were about one month apart. The code base was then moved forward to these commits one by one to emulate incremental changes.

Backups were all saved to a storage directory on the same hard disk as the code base, to eliminate the performance variations introduced by different implementations of networked or cloud storage backends. Details can be found in linux-backup-test.sh.

Here are the elapsed real times (in seconds) as reported by the time command, with the user CPU times and system CPU times in parentheses:

Clearly Duplicacy was the winner, by a comfortable margin. It is interesting that restic, while being the second fastest, consumed far more CPU time than elapsed real time, which is bad for the use case where users want to keep the backup tool running in the background to minimize interference with other tasks. This could be caused by using too many threads (or, more precisely, goroutines, since restic is written in Go) in its local storage backend implementation.
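The chunk-size configuration described above is applied when the Duplicacy storage is initialized. A minimal sketch, assuming the -c (average chunk size) option of duplicacy init and placeholder repository-id and storage paths:

```shell
# Sketch: initialize a Duplicacy storage with a 1 MB (2^20-byte) average
# chunk size to match restic, instead of the 4 MB default.
# "linux-test" and the paths are placeholders, not from the original post.
cd /path/to/linux
duplicacy init -c 1048576 linux-test /path/to/storage
duplicacy backup -stats    # run the initial backup
```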
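The -e repokey-blake2 option matches the repository-encryption syntax of Borg (the actively maintained fork of Attic), where this mode first appeared in version 1.1.0. Assuming Borg is the tool in question, enabling it would look like:

```shell
# Sketch, assuming Borg: create a repository with the repokey-blake2
# encryption mode (Borg 1.1.0+ only), then back up the code base.
# The repository path and archive name are placeholders.
borg init -e repokey-blake2 /path/to/borg-repo
borg create /path/to/borg-repo::snapshot-1 /path/to/linux
```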
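The roll-back-and-advance procedure for testing incremental backups can be sketched as follows. The dates and the backup command are illustrative assumptions; the actual steps are in linux-backup-test.sh:

```shell
#!/bin/sh
# Sketch of the incremental-backup loop: start from a mid-2016 commit and
# advance roughly one month at a time, backing up after each checkout.
# Dates and the backup command are assumptions, not the original script.
cd /path/to/linux || exit 1

for date in 2016-07-15 2016-08-15 2016-09-15 2016-10-15; do
    # Latest commit on master before the given date.
    commit=$(git rev-list -1 --before="$date" master)
    git checkout -q "$commit"
    # time reports the elapsed real, user CPU, and system CPU times.
    /usr/bin/time duplicacy backup
done
```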
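The observation that restic's CPU time exceeded its elapsed real time can be quantified as average core utilization, (user + sys) / real. A small sketch with made-up numbers (not the measured results): a ratio above 1 means more than one core was kept busy on average, which is what makes a background backup intrusive.

```shell
# Made-up example times in seconds, NOT the measured results:
real=60 user=90 sys=30
# Average core utilization = (user CPU + system CPU) / elapsed real time.
awk -v r="$real" -v u="$user" -v s="$sys" \
    'BEGIN { printf "%.1f\n", (u + s) / r }'
# prints 2.0  (two cores busy on average)
```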