Ill-advised research on Linux kernel lands computer scientists in hot water


It seems like a dumb thing to do, but on the other hand it's not a bad idea to do tests for vulnerabilities.
 
It doesn't sound to me like the Linux folks are against testing for vulnerabilities; I believe they simply want it done out in the open and in a professional manner. Had the UM folks voiced a concern publicly while offering their patch as a test, I suspect the reaction would've been different.

IME with various change process boards/engineers/managers, even arrogant process champions generally welcome testing of their process. I don't imagine they would be too happy, however, if someone knowingly steered a flawed product through the process just to gain notoriety.
 
Had the UM folks voiced a concern publicly while offering their patch as a test, I suspect the reaction would've been different.

The issue is whether complacency exists in the review process; if someone comes and says they're going to test for complacency, that a priori ruins the experiment, since the foreknowledge ostensibly removes the complacency that might have existed.

I don't imagine they would be too happy, however, if someone knowingly steered a flawed product through the process just to gain notoriety.

It's unclear that this was the intent. Obviously, they wanted to publish, given that they did indeed have an agenda to prove that "hypocrite commits" were a real thing. The Linux community has had all this kumbaya over its lifetime, but I think they're mostly protesting because their bubble has been burst. Frankly, I think the researchers did them a favor; had there actually been malicious actors at play, there could have been a malware/Trojan-horse-laden version of Linux in the wild, and it might have been the next zero-day exploit in the news. If I were such an actor, I would be playing a long game of offering legitimate fixes and upgrades over a period of time before I slipped in some rogue code.


TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
They could have played with the kernel in a 'sandbox' and let the Linux guys know about the results... what they did was foolhardy at best.

Rather than think of climate change and the corona virus as science, think of them as the wrath of God. Feel any better?

-Dik
 
They could have played with the kernel in a 'sandbox'

They weren't trying to find vulnerabilities in the code, they were trying to introduce vulnerabilities in the code through the code submission review process, so there is no "sandbox."

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
They could have done that in the sandbox... just like they were playing with the real kernel... no difference... just location. What they did was irresponsible IMHO.

Rather than think of climate change and the corona virus as science, think of them as the wrath of God. Feel any better?

-Dik
 
The issue is whether complacency exists in the review process; if someone comes and says they're going to test for complacency, that a priori ruins the experiment, since the foreknowledge ostensibly removes the complacency that might have existed.

The entire point of having a process is to remove the human element, so prior knowledge changes nothing. If your process has a human element, then it's inherently flawed.
 
I've been out of the Linux world for several years, but if memory serves, the whole kernel updating process is very human-centric.

A user envisions some new piece of code to update, patch, or otherwise modify the Linux kernel; they write the code; they submit it to the parties responsible for maintaining that portion of the kernel; it gets reviewed by the maintainers and any other developers subscribed to that mailing list; after the review and comment period the code is either incorporated or discarded.
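
If it helps to see that concretely, the flow is roughly the sketch below - my own illustration, not anything from the article - just a thin Python wrapper around the tooling that ships in the kernel tree itself (checkpatch.pl, get_maintainer.pl, git format-patch / git send-email). It assumes you're sitting in a kernel checkout with the proposed change already committed at HEAD, and the driver path is a made-up placeholder.

```python
# Rough sketch only: driving the standard kernel-tree submission tooling from
# Python. Assumes the proposed change is already committed at HEAD in a kernel
# checkout; the driver path below is a hypothetical placeholder.
import subprocess

def run(cmd):
    """Run a command in the current tree and return whatever it printed."""
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# 1. Style/static checks submitters are expected to run on their own commit.
print(run(["./scripts/checkpatch.pl", "--git", "HEAD"]))

# 2. Ask the tree who maintains the files the patch touches.
print(run(["./scripts/get_maintainer.pl", "-f", "drivers/example/example.c"]))

# 3. Turn the commit into a mailable patch; the actual review, comments, and
#    accept/discard decision all happen later, in public, on the mailing list.
patch = run(["git", "format-patch", "-1"]).strip()
print(f"Ready to mail {patch} via: git send-email --to=<maintainer> --cc=<list> {patch}")
```

Every gate after checkpatch in that chain is a human reading email, which is exactly the element being tested.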

As a free and open-source package "maintained" by thousands of developers all over the world, I'm not sure how you remove people from the process. In fact, the scale of involvement is intended to dilute the human-error component, the idea being that SOOOOO many people are looking at it that somebody is bound to find a problem if it exists. And that's what these people were trying to show: that the worldwide review process has a flaw. It's not just any flaw, though; they are showing that the fundamental principle on which so many Linux users operate is flawed. You don't just go to an amorphous group of people and say "Um, yes, hello...your entire operational concept is fundamentally flawed. You should fix it." and expect a meaningful response.

So I agree with IRstuff here that the only practical place to test this is the real process itself. I suppose you could randomly find a cross section of Linux developers of various skills and experience and have them review a piece of code, but how do you remove the bias of those people knowing they're involved in research of some kind, which would likely increase their awareness? And how do you prove to the rest of this ecosystem that the fact that you slipped your malware by those 30 people means it would slip by the 30,000 who might have an opportunity to look at it if you released it into the wild (I'm making up numbers - a quick Google search didn't give me anything useful)?
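
Just to put toy numbers on that extrapolation problem (the per-reviewer miss rate below is completely made up, and the independence assumption is exactly what's in question): if each reviewer independently misses a given flaw with probability q, the chance that every one of n reviewers misses it is q^n.

```python
# Toy illustration only: made-up per-reviewer miss probability, and it assumes
# reviewers look independently - which is the very assumption being tested.
q = 0.9  # assumed chance a single reviewer misses the flaw
for n in (30, 300, 30_000):
    print(f"{n:>6} independent reviewers: P(all miss) = {q**n:.3g}")
```

Under that assumption, slipping past 30 reviewers tells you almost nothing about 30,000; and if reviewers aren't actually independent (everyone assumes someone else looked), the exponent does far less work than people like to think.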

There's another problem - if you go through the trouble of proving that this is a problem but fail to convince the ecosystem that it's real, all you've done is point malefactors to a vulnerability.

So was this a good idea? No, probably not. But is there a plausible end goal with good intentions, and no good alternative way to reach it? I'd say yes.
 
IRstuff said:
Frankly, I think the researchers did them a favor; had there actually been malicious actors at play, there could have been a malware/Trojan-horse-laden version of Linux in the wild, and it might have been the next zero-day exploit in the news. If I were such an actor, I would be playing a long game of offering legitimate fixes and upgrades over a period of time before I slipped in some rogue code.
Replace "Linux" with "SolarWinds", and you got yerself a real problem, eh? [wink]

Dan - Owner
 
I am not sure you can even have secure code without restricting additions heavily, being very opaque and esoteric, and essentially crippling almost all development.

I think the people who did what they did are being criticized because they put egg on the faces of some people who overplayed claims of security. I can guarantee that if these two students did this, bad actors have already done the same.
 
Fischstabchen,

The standard argument for open source software is that many eyeballs watch the code. If the source code is closed, you cannot see incompetence, or maliciousness. I read somewhere a few years back that the Chinese and French governments are investigating GNU/Linux as official operating systems, because Microsoft Windows is the work of Damn Foreigners.

--
JHG
 
Drawoh,

There aren't enough eyeballs for the amount of code. It would be impossible to do any real audit of the operating system. Having many eyeballs from several different groups is good, but I don't think people understand what an undertaking it would be to validate the security of millions of lines of code. It is such a monumental task that I doubt there has ever been anything of significance that wasn't riddled with bugs and security holes. Why does anyone think that bug-free or secure code could exist when nobody would think it possible to have a library of books without any typos?
 
Fischstabchen,

This is why everybody monitors bug reports.

Nobody monitors the whole code. Each programmer maintains one or two programs within the OS.

--
JHG
 
Drawoh,

I don't think monitoring the bug reports is enough. My general point is that it is nearly impossible to have the resources to be fully secure. Honestly, the security of an operating system should be a multi-national effort, with billions of dollars spent annually on validation and security verification. National hacking efforts cut through nearly everything because the resources are there. The Linux folks don't have squat in comparison. I am not saying they are bad, but why are we expecting them to find holes quicker than groups that have a hundred or a thousand times more resources? It would be in the public interest to have multi-national agencies, with billions of dollars in resources, tightening up loopholes, bugs, and security flaws, based on what happened in Ukraine, Georgia, and recently with U.S. pipelines.


[Image: cybercrime clock graphic]
 
and the numbers will get bigger...

Rather than think of climate change and the corona virus as science, think of them as the wrath of God. Feel any better?

-Dik
 
My view...

Codebases are entirely too large for an individual (or even a small team) to keep track of, let alone modify, every bit of code. Allowing anyone and their uncle access to view and modify the code is reasonable... but each piece must be apportioned to a handful of "trusted" folks for verification, something severely lacking in the current model (and which may never be resolved as long as the model relies strictly on volunteers). As with any trust system, there are bound to be bad actors, no way around it, but the people who are currently allowed to verify edits are trusted mainly because of their willingness to stay with a project long term and be "known" to the others (rather than through a more rigorous vetting system).

Look at SolarWinds... code changes were downloaded practically on a nightly basis from an open-source codebase, yet no one was set as the gatekeeper to verify those changes were not potentially malicious. That was just careless from a security standpoint.
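
For what it's worth, the mechanical half of a gatekeeper check is cheap. Something like the sketch below - my own illustration, with a made-up repo path and release tag, using git verify-tag / git verify-commit, the stock Git commands for checking GPG signatures - would at least refuse unsigned changes before any human ever looks at them:

```python
# Illustrative sketch only: refuse to pull an update unless the tag or commit
# carries a valid GPG signature from a key already trusted by the verifier.
import subprocess

def signature_ok(ref: str, repo: str = ".") -> bool:
    """True if 'ref' verifies as a signed tag or a signed commit in 'repo'."""
    for cmd in (["git", "-C", repo, "verify-tag", ref],
                ["git", "-C", repo, "verify-commit", ref]):
        if subprocess.run(cmd, capture_output=True).returncode == 0:
            return True
    return False

release = "v2020.2.1"  # made-up release tag for the nightly update
if signature_ok(release):
    print(f"{release}: signature checks out; hand it to the human gatekeeper")
else:
    print(f"{release}: unsigned or bad signature; do not ship it")
```

That doesn't replace human review, of course; it just means an unsigned change can't ride in quietly overnight.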

Dan - Owner
 
MacGyver,

I think the fundamental problem with software development is that it isn't disciplined and codified. Without mandated practices for software development, it will be impossible to get people to write, or even be aware of, secure code, and to get customers to pay for more secure code. Customers are not going to like code taking much longer to develop and costing more when security is generally not a visible part of the customer's experience. Insecure and secure code function identically to the customer. The problems are systemic; nearly any large company with credit card information gets breached, so it is easy to say that there are fundamental problems in understanding and selling security to customers.
 
I think the people who did what they did are being criticized because they put egg on the faces of some people who overplayed claims of security.

No, they're being criticized because what they did was highly unprofessional, unethical, and also paid for with the public's tax dollars.

There are professional ways of voicing potential concerns, and publishing a report smearing others isn't one of them. You'd be pretty pissed if a coworker put up a billboard publicly blasting you for a potential failing. Standard ethical practice for engineers (and most other professions) with concerns over another's work is to review the concern with them directly.
 