Microsoft says that the SolarWinds hackers were able to view some of the company’s source code.
The news comes as Microsoft continues to investigate the massive SolarWinds attack, which saw hackers compromise updates to SolarWinds' Orion IT management software. The compromised updates resulted in malicious code being installed on systems at various U.S. government agencies and tech companies. That malicious code may have allowed the hackers to monitor operations for up to nine months before they were detected.
It’s worth noting that the Canadian government says that despite using SolarWinds products in several agencies and departments, it hasn’t found any evidence of compromise related to the attack.
In an update shared by Microsoft’s Security Response Center, the company explained that it discovered infiltration of systems “beyond just the presence of malicious SolarWinds code.” That deeper infiltration allowed hackers to “view source code in a number of source code repositories.” However, the compromised account that was used to view the repositories didn’t have permission to modify any code or systems. In other words, while the hackers were able to view the code, it sounds like they couldn’t modify it to spread malicious code to other Microsoft systems.
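The distinction Microsoft is drawing here is the standard read-versus-write split in repository access control. The following is a minimal, purely illustrative Python sketch of that idea; the repository and account names are hypothetical, and this is not Microsoft's actual tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Repository:
    name: str
    permissions: dict = field(default_factory=dict)  # account -> set of rights

    def view_source(self, account: str) -> str:
        # A "read" right lets the account see the code...
        if "read" not in self.permissions.get(account, set()):
            raise PermissionError(f"{account} cannot view {self.name}")
        return "<source code contents>"

    def push_change(self, account: str, change: str) -> None:
        # ...but only a "write" right allows modifying it.
        if "write" not in self.permissions.get(account, set()):
            raise PermissionError(f"{account} cannot modify {self.name}")
        # ...apply the change to the repository...

# Hypothetical scenario: the compromised account holds read access only.
repo = Repository("internal-service", {"compromised-account": {"read"}})
print(repo.view_source("compromised-account"))    # viewing succeeds

try:
    repo.push_change("compromised-account", "inject backdoor")
except PermissionError as err:
    print(err)                                     # modification is blocked
```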
Despite the intrusion going deeper than initially thought, Microsoft says it found “no evidence of access to production services or customer data.” Additionally, the ongoing investigation has so far “found no indications that [Microsoft’s] systems were used to attack others.”
Microsoft claims exposed source code doesn’t elevate risk
On top of that, Microsoft explained in the update that its security model assumes attackers can already view its source code, even though the code isn’t open source and, in practice, outsiders generally can’t see it. Because of that approach, the company says it doesn’t rely on the secrecy of its code to keep products secure, and it claims that hackers viewing the source code therefore doesn’t elevate the risk for users. However, Microsoft didn’t disclose how much code the hackers viewed or which code was exposed.
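The principle Microsoft is invoking is often summarized as not relying on “security through obscurity”: the design can be public as long as the secrets (keys, credentials) stay private. Here’s a small, hypothetical Python example of that idea using the standard hmac module; it isn’t tied to any Microsoft system.

```python
# The signing code below is fully visible, yet an attacker who reads it
# still cannot forge a valid tag without the secret key. Security rests
# on the key, not on keeping the algorithm hidden.
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # the only thing that must stay private

def sign(message: bytes) -> str:
    """Return an HMAC-SHA256 tag for the message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Check a tag in constant time."""
    return hmac.compare_digest(sign(message), tag)

msg = b"software update v1.2.3"
tag = sign(msg)
print(verify(msg, tag))                 # True: the key holder can verify
print(verify(b"tampered update", tag))  # False: forgery fails without the key
```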
While it’s good that Microsoft doesn’t rely on source code secrecy for security, I’m not sure that eliminates all risks tied to exposing the source code. One benefit of open-source software is that anyone can inspect the code to see what it does, which often means a community of people is monitoring it for issues. In a situation like this with closed source code, only Microsoft can inspect the code, which ultimately means users need to trust that Microsoft has fully vetted it, hasn’t missed anything and isn’t misleading anyone about the hack’s impact. Coupled with not knowing what code was exposed, it’s hard to judge how concerned users should be.
Finally, Microsoft notes that what it has learned so far during its investigation leads the company to believe the attack was carried out by a “very sophisticated nation-state actor.” The U.S. government, on the other hand, has implicated Russia in the attack.
Regardless, the attack’s massive scale and depth mean it will be months before we know the true impact. Microsoft’s latest disclosure is just one example of that. With any cyberattack, it can take time to uncover all the effects — the bigger the attack, the more time it takes.