Researchers Secretly Tried To Add Vulnerabilities To Linux Kernel, Ended Up Getting Banned

The Linux kernel is one of the largest software projects in modern history, with a gigantic 28 million lines of code.

Contributors from all over the world and from different fields submit a large number of patches to the Linux kernel maintainers each day, so that they can be reviewed before being merged into the official Linux kernel tree.

These patches could help fix a bug or a minor issue in the kernel, or introduce a new feature.

However, some contributors were caught today by the Linux kernel maintainers trying to stealthily submit patches containing security vulnerabilities to the kernel.

Researchers from the University of Minnesota in the US were working on a research paper about submitting patches containing hidden security vulnerabilities to open source projects, in order to scientifically measure the probability of such patches being accepted and merged. Such patches could leave the affected open source projects vulnerable to various attacks.

They used the Linux kernel as one of their main experiments, due to its well-known reputation and widespread adoption around the world.

These researchers submitted patches which did not fully fix the issues they claimed to address, but also did not appear at first glance to introduce a security vulnerability. A number of the patches they submitted were indeed successfully merged into the Linux kernel tree.
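
To make this concrete, here is a minimal, purely hypothetical C sketch (it is not taken from any of the actual submissions) of how an innocuous-looking “cleanup” on an error path can plant a latent double free or use-after-free, because existing callers were written against the old ownership contract:

```c
/*
 * Hypothetical illustration only -- this is NOT code from any patch the
 * researchers actually submitted. It shows the general pattern: a small
 * "fix" that frees memory on an error path looks harmless in isolation,
 * but breaks callers that still expect to own the pointer.
 */
#include <stdlib.h>
#include <string.h>

struct conn {
    char *buf;
};

int conn_init(struct conn *c, const char *name)
{
    c->buf = malloc(64);
    if (!c->buf)
        return -1;

    if (strlen(name) >= 64) {
        /* The seemingly helpful "cleanup": free on the error path to
         * avoid a leak. The pointer is not set to NULL, and existing
         * callers were written to free c->buf themselves on failure. */
        free(c->buf);
        return -1;
    }

    memcpy(c->buf, name, strlen(name) + 1);
    return 0;
}

int main(void)
{
    struct conn c = { .buf = NULL };
    char long_name[80];

    /* Build a name that is too long, forcing the error path. */
    memset(long_name, 'a', sizeof(long_name) - 1);
    long_name[sizeof(long_name) - 1] = '\0';

    if (conn_init(&c, long_name) < 0) {
        /* Caller cleanup written against the old contract: c.buf was
         * already freed inside conn_init(), so this second free() is
         * the latent bug -- running this will typically abort with a
         * double-free error. */
        free(c.buf);
        return 1;
    }

    free(c.buf);
    return 0;
}
```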

However, today they were caught by the Linux kernel maintainers and publicly called out. In an email, Greg Kroah-Hartman, one of the major Linux kernel maintainers, disclosed their approach and rejected their so-called “newbie patches”:

You, and your group, have publicly admitted to sending known-buggy patches to see how the kernel community would react to them, and published a paper based on that work.

Now you submit a new series of obviously-incorrect patches again, so what am I supposed to think of such a thing?


Apparently, Greg and a number of other maintainers were not happy about this, as such experiments waste their time and effort and amount to engaging in bad faith with the Linux kernel development process:

Our community does not appreciate being experimented on, and being “tested” by submitting known patches that are either do nothing on purpose, or introduce bugs on purpose. If you wish to do work like this, I suggest you find a different community to run your experiments on, you are not welcome here.

Finally, Greg announced that the Linux kernel will ban all future contributions from the University of Minnesota, and that the patches its researchers previously submitted will be removed from the kernel:

Because of this, I will now have to ban all future contributions from your University and rip out your previous contributions, as they were obviously submitted in bad-faith with the intent to cause problems.

The research paper they worked on was published back in February 2021, around two months ago. In it, they disclose the approach and methods they used to get vulnerabilities inserted into the Linux kernel and other open source projects.

They also claim that the majority of the vulnerabilities they secretly tried to introduce into various open source projects were successfully inserted, at an average rate of around 60%:

[Screenshot from the research paper showing the reported acceptance rates.]

It is still unclear at this moment which other open source projects they targeted, and how many vulnerabilities they actually succeeded in inserting into them.

Greg has since sent another email in which he reverts most of the University of Minnesota’s patches from the Linux kernel and puts some of them on hold.

Discussion: What do you think about this approach? Do you think the researchers’ actions were justified in the name of science and security? Or do you think the Linux kernel maintainers were right to ban them from the kernel, and that this approach should not be encouraged?

Comments to: Researchers Secretly Tried To Add Vulnerabilities to Linux Kernel, Ended Up Getting Banned
  • April 21, 2021

    Hell yeah!! We don’t need this shit! We already have the SolarWinds issue, and then FireEye’s exploit tools were stolen. These people are insane. A concerted effort to make everything vulnerable???

  • April 21, 2021

    Did they plan on fixing their mistakes after their paper? They should have immediately reverted the patches they sent out. This is all very fishy.

  • April 21, 2021

    I consider Greg’s response to be exceedingly magnanimous. As it could affect many servers at commercial institutions, they could face unlimited damages for this exploit if a suitable server owner took legal action for ‘deliberate destruction’ of known public software.

  • April 21, 2021

    They deserve worse. The university has an ethics committee that signed off on this! This is so unethical; I also suspect that this is illegal under the Computer Fraud and Abuse Act. They should be imprisoned.

  • April 21, 2021

    They removed the flaws before the code was committed.

  • April 21, 2021

    Blackballing is the right approach here. And yeah, the system worked. Their bad-faith actions were caught… but punitive steps may make others think twice in the future. Bravo.

  • April 21, 2021

    Yeah… this doesn’t sound legal at all.

  • April 21, 2021

    This is what jumped out at me as well: not only is the approach unethical, but the way they executed it seems to have been seriously flawed as well.

  • April 22, 2021

    I’ve secretly poisoned several city water supply plants with a 2 part poison to test whether or not the cities test for unknown chemicals.

    When caught, I can just say I was doing scientific research! What nonsense. Who approved this? And what other open source projects were affected?

    Imagine trying to break into a building and when caught say you were just testing their security.

    • April 22, 2021

      This is worse than that. In this case, it would be comparable to a known chemical imbalance in a water supply.

      In order to show that this imbalance exists, they purposely alter the levels. It is ludicrous!

  • April 22, 2021

    Honestly, after this, I wouldn’t be surprised if the journal the paper was published in forces the authors to retract it. It never looks good for a journal when your authors are using unethical methods.

  • April 22, 2021

    That university just lost all credibility for its CS department, and likely several adjacent departments; and what’s worse, this is baby stuff that their ethics review board should have caught.

    The whole board needs to go, the students should be expelled, and any faculty who were privy to this need to be reviewed. Following that, they need to track down Greg, get on their hands and knees, and beg him to make a deal with them to remedy this.

    And Greg Kroah-Hartman? He probably shouldn’t talk to them at all after this embarrassment.

  • April 22, 2021

    It’s very shocking to me that the UM ethics board would approve this. Additionally, the paper being published at an IEEE symposium, of all places, is troubling.

    The researcher duo already received substantial pushback in December. Now, with publications writing about this, I can only wonder how the university (whose members are now barred from contributing) and the IEEE will react.

    The duo showed that a known issue exists by acting as bad-faith actors, contributing both inept and injurious “fixes”. I can envision no worse method of data collection.

    If anything, this work is a cautionary tale.

  • April 22, 2021

    I hope the University of Minnesota brings its wrath down on the “researchers”. A formal public apology from them would be a good start, along with a promise never to allow this to happen again. Firing a few “researchers” would also help.

  • April 22, 2021

    big chungus

  • April 22, 2021

    I think a lawsuit against the University of Minnesota for damages is in order. There need to be consequences for bad behavior.

  • April 22, 2021

    I think U. of M. should be the subject of a class-action lawsuit by all users of all open-source platforms they intentionally made less secure. They should be fiscally liable for any breaches that used their intentional vulnerabilities. They should be banned from ALL open-source platforms unless and until they publicly make restitution, fire the researchers involved, and commit to a pre-submission review process.
    And whoever approved the grant for this BS should be fired and never allowed to sit on a grant committee or funding board ever again.

  • April 22, 2021

    I hope they fire all members of the ethics board who thought this was alright. Students can screw up, but not these people. They should be banned from all activities. And the Linux project is bigger than their little university; people run entire companies on the Linux kernel. If there was any damage done, they should be fined for it.

  • April 22, 2021

    Reviewed? Terminated. This was an intentional act that could have caused a huge amount of harm. I don’t give a rat’s ass if they had tenure; you just DO NOT DO what they did.

  • April 22, 2021

    Agreed, heads need to roll. I’m sure the CS department will try to minimize all of this; they can be shown the door also…

  • April 24, 2021

    Terrible decision by the professors and ethics commission to green light this. It’s wrong on so many levels.

    However, for a minute, let’s play the devil’s advocate here.
    A popular argument for open-source software is the idea that many eyes can check the code, so there’s more chance of a (security) flaw being caught before it goes out to the world. How do you test if that concept works in real life?
    I suppose you could make an in-house fork of the kernel with university maintainers and test this on them, but I think they would make different decisions because the people submitting fixes/bugs are people they know. Also, you’d have to tell them a pretty good story about how it’s logical that you’re maintaining your own kernel within the university because of… reasons?

    What I’m trying to say is: it’s shocking that +/- 60% of submitted vulnerabilities were included in the affected open-source projects, and I don’t think we would’ve learned that if the researchers didn’t test this in the real world.
    Was it unethical? Heck yes. Do kernel maintainers have a valid reason to be pissed off? Absolutely. But do I think there was another way to reach this conclusion? I don’t know. I’m not a researcher.

