Security

‘Organisations must own the security implications of third-party code in their software’: Mike Hanley, GitHub – The Hindu


When Microsoft’s AI coding assistant was released in 2021, it ushered in a new era of coding done with a few simple prompts. As a result, GitHub’s business ballooned due to the now-popular AI pair programmer, Copilot. Currently, there are about 33 million developers on GitHub in the U.S. and India alone. Surprisingly, for a community this big and open, security wasn’t a focal point until very recently.

In an exclusive interaction with The Hindu, Mike Hanley, the first Chief Security Officer for GitHub, spoke about security in the age of AI and how a hack that could have exposed millions of Linux systems was thwarted.

Edited excerpts below:

THB: What took GitHub so long to hire a security head, and what are some of the changes you brought to the platform?


MH: When I was hired, it was really for security to be more of a leading function, not just for the company but in the broader ecosystem. So, when I came in, we brought all the security functions under one roof, as a day-to-day top-level function inside the company reporting directly to the CEO, because that’s what we think it should be. We tripled the amount of investment that we had in the team from a headcount perspective. We have a good-sized security programme and capabilities, and I’m really proud of the work that the team is doing. As to why there wasn’t one before, I can’t really speak to that. But I can say that we literally tripled down on it to create a positive impact, not just in GitHub but for all the customers who are depending on us in the broader software ecosystem.


THB: What are some of the biggest challenges your team faces?


MH: What’s interesting is that it’s not just the pace at which changes are happening, but that the pace also accelerates all the time. It’s funny to think that ChatGPT came out only about 18 months ago. It feels like several years at this point, right?

This pace of change is only going to get faster. The big transition that I think people need to look for is how you leverage the benefits of that while smartly managing risk. Obviously, we’re big believers in AI – we have wildly successful products in the AI space like GitHub Copilot, which is the first AI pair programmer and is helping well over a million users be more productive and more secure while they’re writing code.

But we’re also adopting AI in different places in our business, including making it easier to make strides on things that seem trivial. As it turns out, making those strides consistently across the company can actually be pretty difficult.

It’s also a lot of figuring out how we apply things that we know how to do well, like assessing how vendors use our data, to new problems like AI without necessarily reinventing the wheel. I’ve seen instances where teams think they don’t know how to assess it, so they just say no to it. But we actually know how to ask questions about data flow diagrams, threat models, and vendor security. These are things that we’ve been doing for decades, and it’s just a matter of applying them in a new setting and context.

THB: In April, a backdoor was discovered in the XZ repository that could have affected millions of Linux computers. It turned out that an engineer who was not from a security team, and who was just curious, was able to save the day. What are some of the lessons learnt from this case?


MH: The good news is that security has been getting more cooperative, and it needs to continue to be collaborative, especially where you have security teams working closely with engineers to understand their work, support it, and make security as easy as possible. I, along with the company, believe that security starts with the developer. You want to build everything around keeping them secure and making it easy for them to make good security decisions. Also, organizations that incorporate third-party or open-source code into their software, even though they didn’t write it, still own the security implications of it. So, they need to be responsible for how they utilize it.


Here, because the code was open source, the developer who was investigating the problem was able to identify it. But as a broader ecosystem, we need to invest in better capabilities for software supply chain security.

THB: Do incidents like these create a sense of fear?


MH: I think it definitely can in some places. That’s why it’s important to look carefully at the takeaways from instances like this. For a lot of ransomware incidents, you just need to be better at catching the vulnerability, which a lot of organizations are not good at. A lot of phishing incidents and account takeover campaigns tell us we still have a long way to go as an industry in getting people to adopt secure authentication technologies like two-factor and passkeys. GitHub is working on helping developers drive adoption and address issues like that.
