The accused NSA leaker used "web crawler" software, which is widely available, to methodically amass a huge trove of data while he went about his duties as an IT contractor for the super-secret agency.
A senior intelligence official told the Times the process was "quite automated" and probably could have been easily detected. "We do not believe this was an individual sitting at a machine and downloading this much material in sequence," the official said.
Web crawler software, also known as a spider, moves from Web site to Web site via embedded links in documents. The programs can be directed to copy whatever information they come across.
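The mechanism described above can be illustrated with a minimal sketch. This is a hypothetical example, not the software reportedly used: it walks an in-memory set of pages (the `SITE` dict stands in for real Web servers), extracts embedded links, and copies the content of every page it reaches.

```python
# Minimal sketch of a web crawler ("spider"): start at a seed page,
# copy its content, extract embedded links, and follow each in turn.
# SITE is a stand-in for real pages; a hypothetical example only.
from collections import deque
from html.parser import HTMLParser

SITE = {
    "/index": '<a href="/docs">docs</a><a href="/about">about</a>',
    "/docs":  '<a href="/index">home</a><p>internal report</p>',
    "/about": '<p>about page</p>',
}

class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags as the parser runs."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def crawl(start):
    """Breadth-first traversal: visit every linked page once, copy its body."""
    seen, queue, copied = set(), deque([start]), {}
    while queue:
        url = queue.popleft()
        if url in seen or url not in SITE:
            continue
        seen.add(url)
        html = SITE[url]
        copied[url] = html          # copy whatever it comes across
        parser = LinkExtractor()
        parser.feed(html)
        queue.extend(parser.links)  # move on via the embedded links
    return copied

pages = crawl("/index")
print(sorted(pages))  # every page reachable from the seed
```

From a single seed page the crawler reaches the whole linked set without any human clicking through it, which is why the process reads as "quite automated" in access logs.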
Investigators told Congress last week that Snowden's crawlers accessed about 1.7 million NSA documents during their forays. Technical experts told the Times they suspect some of the documents were downloaded automatically rather than being copied on Snowden's orders.
Officials also told the Times Snowden was able to snoop around sensitive areas because he was working out of a small NSA branch office in Hawaii that had not yet received the cybersecurity upgrades other offices had.
But officials also told the Times that Snowden's activities had been questioned by his supervisors "a few times." Snowden, however, avoided trouble by saying he was performing basic maintenance in line with his position as a system administrator.