The biggest problem I find is that keeping redirects in place when you move content seems to have gone out of fashion. So many links to news websites and the like now redirect to either / or a 404 page (which is a very odd thing to redirect to, in my opinion).
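You can check for this programmatically. Here's a rough sketch in Python using the requests library (the URL is just a placeholder, and what counts as "dead" here is my own heuristic): follow the redirect chain and flag links that land on the site root or a 404.

    import requests
    from urllib.parse import urlparse

    def check_link(url):
        # Follow the full redirect chain and look at where we end up.
        r = requests.get(url, allow_redirects=True, timeout=10)
        if r.status_code == 404:
            return "404"  # the target (or whatever it redirects to) is a not-found page
        if r.history and urlparse(r.url).path in ("", "/"):
            return "redirected to /"  # the article itself is gone
        return "ok"

    print(check_link("https://example.com/some/old/article"))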
If you're unlucky, the article you wanted to find has also completely disappeared. This is scary, because it's basically history disappearing.
I also wonder what happens to text on AJAX-heavy websites when their JavaScript breaks because a third party goes down. The Internet Archive seems to be building tools to mitigate this, but I found they barely worked on sites built that way.
Another worry is the ever-increasing size of these scripts, which makes archiving more expensive.
You can often pop the URL into the Wayback Machine to bring up the last live copy. The more recent the snapshot, the better it handles dynamic stuff; older pages, especially early AJAX ones, are just gone because the crawler couldn't handle them at the time. It's far from a perfect solution, especially now that the big publishers have finally found their excuse to go after the Internet Archive legally. It's a good silo, but just as vulnerable as any other.
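If you want to do that lookup from a script, the Wayback Machine also exposes a public availability API. A minimal sketch (the endpoint is real; the helper name and example URL are just mine):

    import requests

    def latest_snapshot(url):
        # Ask the Wayback Machine for the closest archived copy of a URL.
        resp = requests.get("https://archive.org/wayback/available",
                            params={"url": url}, timeout=10)
        snap = resp.json().get("archived_snapshots", {}).get("closest")
        return snap["url"] if snap and snap.get("available") else None

    print(latest_snapshot("https://example.com/some/old/article"))

It only returns the closest snapshot, so to approximate "last live copy" you'd pass an additional timestamp parameter or browse the calendar view by hand.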