Imparting Malware Resistance With a Randomizing Compiler

First time accepted submitter wheelbarrio (1784594) writes with this news from the Economist: "Inspired by the natural resistance offered to pathogens by genetically diverse host populations, Dr Michael Franz at UCI suggests that common software be similarly hardened against attack by generating a unique executable for each install. It sounds like a cute idea, although the article doesn't provide examples of what kinds of diversity are possible whilst maintaining the program logic, nor what kind of attacks would be prevented with this approach." This might reduce the value of MD5 sums, though.


  • Cute but dumb (Score:5, Insightful)

    by oldhack ( 1037484 ) on Thursday May 29, 2014 @05:38PM (#47123743)
    You think you have buggy software now? This idea will multiply a single bug into a dozen.
    • Re:Cute but dumb (Score:5, Insightful)

      by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Thursday May 29, 2014 @05:56PM (#47123897) Homepage Journal
      If bugs are detected earlier, they can be fixed earlier. Randomizing can turn a latent bug into an incredibly obvious bug [orain.org].
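
      For concreteness, here is a minimal C sketch (hypothetical code, not from TFA) of the kind of latent bug that a shuffled stack or frame layout drags into the open:

          #include <stdio.h>

          static int scaled(int x)
          {
              int scale;                /* BUG: not initialized on every path */
              if (x > 100)
                  scale = 2;
              return x * scale;         /* reads an indeterminate value when x <= 100 */
          }

          int main(void)
          {
              /* With one layout the stale stack slot happens to hold 0 and the bug
                 hides; with another it holds garbage and the output is obviously wrong. */
              printf("%d\n", scaled(5));
              return 0;
          }
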
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      And would make that buggy software nearly impossible to patch.
      Every time there's a security vulnerability found, you'd essentially have to reinstall the whole application.

      Knock on wood, but I've not had enough bad experiences with malware to think the tradeoff is worth it.

      • by Jeremi ( 14640 )

        And would make that buggy software nearly impossible to patch. Every time there's a security vulnerability found, you'd essentially have to reinstall the whole application.

        Is there any way to run the patch through the same process (using the same per-install key, of course) so that the result is a locally-transmuted patch that can be applied to the locally-transmuted application?

        (Not that updating the entire application is necessarily a deal-breaker anyway; we all have broadband now, right?)

        • If that were possible, then malware could do the same thing (because we all know the random seed isn't going to be stored securely by average users).

      • by stooo ( 2202012 )

        >> And would make that buggy software nearly impossible to patch.

        A patch applies to the source, recompile, and there you are.

        >> Every time there's a security vulnerability found, you'd essentially have to reinstall the whole application.

        No, you have to patch the source and recompile the exe. It's a much saner workflow than to patch a binary (who does this anyway?).

    • I'd be more worried about it turning non-issues into bugs: the cases where programmers think "ah, that can never happen" or "the program would've crashed/thrown an exception before getting here...", and in 1 in 1000 installs that case has some weird behavior. Personally I prefer less intrusive, honeypot-based approaches like Bitcoin Vigil [bitcoinvigil.com]. It's not perfect, but at least it doesn't have side effects or false positives.
      • by stooo ( 2202012 )

        >> and in 1 in 1000 installs that case has some weird behavior.

        Get the compiler rand seed with the bug report.
        Reproduce the compilation and then test the bug.
        Profit.

        This could help to force coders to write tidier code.

    • by Anonymous Coward

      Not really, this is simple to do. We already do it to a minor degree: every time we make a change and recompile, the order gets shifted a little, because most (nearly all) modern programs are modular (meaning they are segmented, often into methods or functions, that can be rearranged in any order without changing the program's logic or flow). All we need to do is reorder the program. It would even be possible to encrypt or sign parts or the whole of a program. This would make more of a challenge for hackers. (bo

    • by gweihir ( 88907 )

      And make Heisenbugs the norm: just compile, and your bug may vanish, multiply, or behave completely differently. Not smart at all...

  • by cant_get_a_good_nick ( 172131 ) on Thursday May 29, 2014 @05:39PM (#47123753)

    Can you imagine parsing a stack trace or equivalent from one of these? Each stack is different.

    Ignoring the fact that Heisenbugs would be much more prevalent.

    Part of programming is paring of states. The computer is an (effectively) infinite-state machine. When you add bounds and checks you're reducing the number of states. This would add a great deal, making bugs more prevalent. Since a lot of attacks are based on bugs, this may increase the likelihood of some attacks.

    • by Anonymous Coward on Thursday May 29, 2014 @05:52PM (#47123859)

      Ahh, but don't forget the benefits! If random bugs could appear or disappear on installs, think of how much tech support time you can save by just saying "Re-install it and you'll be fine."

      Half the time that's what they do now anyways, now you can replace ALL the calls with that!

    • by Anonymous Coward on Thursday May 29, 2014 @05:58PM (#47123921)

      The randomizing compiler could easily be designed to base its randomizations on a seed, and then include that seed in the obj headers and stack dump trace library of the libc it links against. Then the bug would be just as reproducible as with a standard compiler.
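
      Something like the following C sketch would do it (the macro name and report format here are assumptions, not anything from the paper): bake the seed into the binary at build time and emit it with every crash report, so the exact build can be regenerated later.

          #include <signal.h>
          #include <stdio.h>
          #include <stdlib.h>

          #ifndef BUILD_SEED                        /* supplied by the build system,   */
          #define BUILD_SEED "unknown"              /* e.g. cc -DBUILD_SEED=\"1a2b3c\" */
          #endif

          static const char build_seed[] = "DIVERSIFIER-SEED=" BUILD_SEED;

          static void crash_handler(int sig)
          {
              /* fprintf is not async-signal-safe; good enough for a sketch. */
              fprintf(stderr, "fatal signal %d (%s)\n", sig, build_seed);
              _Exit(128 + sig);
          }

          int main(void)
          {
              signal(SIGSEGV, crash_handler);
              /* ... application code ... */
              return 0;
          }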

      • This is the case for the multicompiler. It uses the -frandom-seed argument that is already used by gcc and clang to seed various other nondeterministic processes. This sentence in the summary annoyed me a lot:

        although the article doesn't provide examples of what kinds of diversity are possible whilst maintaining the program logic, nor what kind of attacks would be prevented with this approach."

        I don't know if TFA actually didn't, but the UCI group has published some papers on the multicompiler work, including this one from CGO last year [uci.edu]. The main goal for this is to provide defence against return-oriented programming (ROP) [wikipedia.org] attacks, where you chain together 'gadgets' (small chunks of code

    • by epine ( 68316 ) on Thursday May 29, 2014 @05:59PM (#47123925)

      I must respectfully disagree with you on every point you raise.

      A randomised stack would cause certain types of bugs to manifest themselves much earlier in the development process. Nothing decreases the cost of a bug hunt more than proximity to the actual coding event.

      Such an environment rewards programmers who invest more to validate their loops and bounds more rigorously in the first place. Nothing reduces the cost of a bug more than not coding it in the first place.

      There's nothing that stops the debugging team from debugging against a canonical build, if they wish to do so. If they have a bug that the canonical build won't manifest, they wouldn't even have known about the bug without this technique added to the repertoire. If many such bugs become known early in the development process—bugs that manifest on some randomised builds, but not on the canonical debug build—you've got an excellent warning klaxon telling you what you need to know—your coding or management standards suck. Debugging suck, if instigated soon enough to matter, returns 100x ROI as compared to debugging code.

      Certainly the number of critical vulnerabilities that exist against some compiled binary can only increase in number. So what? The attacker most likely doesn't know in advance which version any particular target will run. The attacker must now develop ten or one hundred exploits where previously one sufficed (or one exploit twice as large and ten times more clever).

      If the program code mutated on every execution, you would have some valid points. That would be stupid beyond all comprehension. An attacker could just keep running your program until it comes up cherries.

      The developer controls the determinism model. It's an asset in the war. There can be more when it helps our own cause, and less when it assists our adversaries.

      Determinism should not be reduced to a crutch for failing to code correctly in the first place. Get over it. Learn how. Live in an environment that punishes mistakes early and often.

      • by Zeek40 ( 1017978 ) on Thursday May 29, 2014 @06:50PM (#47124365)
        You respectfully disagree with his points without actually providing any reason why, and while nick's post makes complete sense, your statements seem to have a ton of unexplained assumptions built in.
        1. What kinds of bugs do you think would manifest earlier using this technique, and why do you think that earlier manifestation of that class of bugs will outweigh the tremendous burden of chasing down all the heisenbugs that only occur on some small percentage of randomized builds?
        2. How does such an environment reward programmers who invest more time in validation? More time spent in validation will result in better code regardless of whether you're using a randomized or non-randomized build. More time spent in validation is a cost you're paying, not some free thing provided by the randomized build process.
        3. I don't know what this sentence means: "Debugging suck, if instigated soon enough to matter, returns 100x ROI as compared to debugging code." If what instigated soon enough?
        4. "Determinism should not be reduced to a crutch for failing to code correctly" - What does this even mean? An algorithm is either deterministic or non-deterministic. If your build system is changing a deterministic algorithm into a non-deterministic algorithm, your build system is broken. If your algorithm was non-deterministic to begin with, a randomized build is not going to make it any easier to track down why the algorithm is not behaving as desired.

        All in all, your post reads like a smug "Code better, noob!" while completely ignoring the tremendous extra costs that are going to be necessary to properly test hundreds of thousands of randomized builds for consistency.

        • by perpenso ( 1613749 ) on Thursday May 29, 2014 @09:58PM (#47125791)

          What kinds of bugs do you think would manifest earlier using this technique ...

          The GP mentioned a randomized stack. An uninitialized variable would be one: something that often accidentally has a value that does no harm (possibly a zero).

          ... and why do you think that earlier manifestation of that class of bugs will outweigh the tremendous burden of chasing down all the heisenbugs that only occur on some small percentage of randomized builds?

          You do realize that your argument for the status quo and not dealing with the "heisenbugs" is essentially arguing to leave a coding bug in place? Recompiling will not necessarily introduce new bugs; rather, it changes the behavior of existing bugs.

          I've seen many of the sort of bugs this recompiling technique may expose, I spent some years porting software between different architectures. Not only did we have different compilers but we had different target CPUs. It was a friggin awesome environment for exposing unnoticed bugs. Software that had run reliably under internal testing for weeks on its original platform failed immediately when run on a second platform. And it kept failing immediately after several crashing bugs were fixed. The original developers, who were actually quite skilled, looked at several of the bugs eventually found and wondered how the program ever ran at all. I've seen this repeated on multiple teams at multiple companies over the years.

          Also developers working on one platform eventually learned to visit a colleague working on the "other" platform when they had a bug that was hard to reproduce. There was a good chance that a hard to manifest bug on one platform would be easier to reproduce on the other.

          There is nothing like cross platform development to help shake out bugs.

          This recompilation idea would seem to offer some of these same benefits. Yes, it complicates reproducibility of crashes in the field, but if one can get the recompilation seed with that crash dump/log, it's more a matter of an extra step than some impossible hurdle.

          Plus recompiling with a different seed each time the developer does a test run at their workstation could help find bugs in the first place, reducing the occurrences of these pesky crashes in the field.

          I'm not saying these proposed recompilations in the field are definitely a good idea, just that the negatives seem to be exaggerated. It looks like something interesting, worth looking into a bit more.

          • by Zeek40 ( 1017978 )
            An uninitialized variable can be caught with a style-checker. There's no need to resort to something like randomized binaries to solve a problem like that. I'm not arguing in favor of leaving bugs in place, I'm arguing in favor of choosing a specific set of binaries to focus your testing efforts on. The bottom line is that testing resources are finite and one of the key steps to fixing a bug is identifying a method of repeatably demonstrating that bug. Having randomized binaries severely complicates tha
            • I mean how the costs don't outweigh the benefits. Dammit, I always proof-read what I think I wrote, not what I actually wrote.
              • I mean how the costs don't outweigh the benefits. Dammit, I always proof-read what I think I wrote, not what I actually wrote.

                Me too. That is when I bother to proofread. :-)

            • I don't see the problem. You have repeatability if the qa/remote crash report includes the randomization seed used for the remote binary. That binary and debugger info gets recreated when you recompile with the seed. It seems a minor inconvenience, although it would be disturbing to see the assembly change every debug session if one is going to that level.
    • by PRMan ( 959735 ) on Thursday May 29, 2014 @06:11PM (#47124041)
      I once thought about writing a virus as an academic exercise (I have never actually written a virus). This was how I was going to evade signature detection. If my virus put random numbers of NOOPs in the code when it rewrote itself and moved the jumps accordingly, it would be very difficult to make a signature for.
      • Ah, the perceptive reader notes that two can play this game. :)
      • by xvan ( 2935999 )
        Not with NOPs; it's been used already and marks your code as "suspicious". But the same obfuscation techniques used for anti-piracy can be (and are) used by virus makers.
    • by Nyder ( 754090 )

      Can you imagine parsing a stack trace or equivalent from one of these? Each stack is different.

      Ignoring the fact that Heisenbugs would be much more prevalent.

      Part of programming is paring of states. The computer is an (effectively) infinite-state machine. When you add bounds and checks you're reducing the number of states. This would add a great deal, making bugs more prevalent. Since a lot of attacks are based on bugs, this may increase the likelihood of some attacks.

      I don't know about you, but with the limited programming experience I have, I'd save this new compiler for the release version and use a normal compiler for the internal version, so I can debug and make sure it's working great. Then I'd use the new compiler for the .exe I'm going to produce and give to people (sell/whatever).

      Hopefully by then most of the major bugs are found. If not, I can compile the source code on a normal compiler and do normal debugging.

      Swear to Gog no one uses their brains anymore.

  • ....why? (Score:5, Insightful)

    by Anonymous Coward on Thursday May 29, 2014 @05:47PM (#47123821)

    ..would a professor of CompSci think this is a good idea, despite the hundreds of problems it *causes* with existing practices and procedures?

    Oh, wait.. maybe because the idea is patented and he'll get paid a lot.
    http://www.google.com/patents/US8239836

    • by perpenso ( 1613749 ) on Thursday May 29, 2014 @09:32PM (#47125649)

      ..would a professor of CompSci think this is a good idea, despite the hundreds of problems it *causes* with existing practices and procedures? Oh, wait.. maybe because the idea is patented and he'll get paid a lot.
      http://www.google.com/patents/... [google.com]

      As an employee of the University of California a professor is *required* to report any discovery or method that *might* be patentable to the University.

      The University takes it from there; it has an office that researches viability, handles the process, and then licenses the patents to "industry". With respect to licensing, small local companies are given a better deal than larger internationals. As for the licensing fees collected, 50% goes to the University, 25% to the department (UC Irvine's Computer Science department in this case) and 25% to the employee(s).

      At least that is how it was a few years ago when I was a grad student at UC.

  • by NotInHere ( 3654617 ) on Thursday May 29, 2014 @05:47PM (#47123827)

    So we should use something like ABS with that randomisation enabled? Or should we trust downloading distinct blobs for every download? For the latter, nice try NSA, but I don't want you to be able to incorporate spyware into my download without being noticed.
    It's already a pity software gets signed by only so few entities (usually one at a time, at least for deb). Perhaps I know that the blob came from Debian, but I can't verify whether it is the version the public gets, or a special version with some ... extra features. The blobs should be signed by more entities, so that all of them would have to be NSLed.

    • So we should use something like ABS with that randomisation enabled? Or should we trust downloading distinct blobs for every download? For the latter, nice try NSA, but I don't want you to be able to incorporate spyware into my download without being noticed.
      It's already a pity software gets signed by only so few entities (usually one at a time, at least for deb). Perhaps I know that the blob came from Debian, but I can't verify whether it is the version the public gets, or a special version with some ... extra features. The blobs should be signed by more entities, so that all of them would have to be NSLed.

      I wouldn't trust it either way. A randomized binary from some site would be insanely dangerous. But even a randomized binary that you compiled yourself is questionable. Who's to say your compiler isn't compromised? Without being able to compare binaries against other people's with identical checksums, you've turned the effort of verifying a file from a global one into one that falls on you alone. You're far more at risk.

      • But even a randomized binary that you compiled yourself is questionable. Who's to say your compiler isn't compromised? Without being able to compare binaries against other people's with identical checksums, you've turned the effort of verifying a file from a global one into one that falls on you alone. You're far more at risk.

        Do you mean Trusting trust? You don't have to also randomize the compiler. Instead of the resulting programs, you can compare the compiler binaries, and check whether they are globally the same. There is only a small loss in security as you would need to globally ensure the compiler works right.

  • Copying the Bad Guys (Score:3, Informative)

    by alphaminus ( 1809974 ) on Thursday May 29, 2014 @05:48PM (#47123837)
    Some malware already does this, which definitely helps it evade heuristic scans. Sounds worth exploring, but I bet it will make the AV they force me to run at work that much more frustratingly restrictive.
  • by generating a unique executable for each install

    ... and cloning a unique customer support team for each install!

  • Gentoo (Score:5, Funny)

    by Bert64 ( 520050 ) <bert@[ ]shdot.fi ... m ['sla' in gap]> on Thursday May 29, 2014 @05:52PM (#47123861) Homepage

    You can already do this with Gentoo, you're highly unlikely to use the same combination of compiler, kernel, assembler, libraries, use flags, compiler flags etc as anyone else...

  • This technique would probably be more effective for making detection-resistant malware than for protecting against malware. The software would still function almost the same, so if it is still interacted with in the same manner, it could still be vulnerable to the same exploit. It also makes it much more difficult to verify that the software is valid, meaning that it actually INCREASES the risk factor for malware on account of being a perfect recipe for trojans.

    The real solution to the problem he is trying to solve is not having a monoculture. This does nothing to solve it. If you have different code bases for operating systems, browsers, etc., the ability to infect all of them may be hampered. That's the same advantage of humans and dogs and snakes not being susceptible to the same pathogens. His form of diversity is more of an environmental one, so it's like different potatoes in a bag looking different despite the fact that they are almost certainly clones of each other. That does nothing against a blight.
    • It blocks ROP. So it is an effective way of preventing a primary attack vector.
      It's not a defense against resident malware.

      Trojans are already doing live randomization. But ROP attacks rely on predictable software layout, so the attack can be developed offline.
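
      A small C illustration of the difference (the function names are made up): under plain ASLR the whole image slides as one block, so the distance between any two functions is the same on every install, which is exactly the predictability an offline ROP chain relies on; a per-install diversified build changes those distances too.

          #include <stdint.h>
          #include <stdio.h>

          static void helper_a(void) { puts("a"); }
          static void helper_b(void) { puts("b"); }

          int main(void)
          {
              /* Casting function pointers to integers is implementation-defined,
                 but fine for a demonstration. */
              uintptr_t a = (uintptr_t)&helper_a;
              uintptr_t b = (uintptr_t)&helper_b;

              /* Constant across runs and installs of a conventional build,
                 different for every diversified build. */
              printf("inter-function distance: %llu bytes\n",
                     (unsigned long long)(b > a ? b - a : a - b));
              return 0;
          }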

      • Since the details of the technique are not all that clear, it's hard to say what it would and wouldn't protect against. If the behavior of the software is less predictable beyond the level of compiling it yourself, the economic damage of new bugs cropping up would be greater than the current economic damage of malware.

        You are missing why it's a boon to trojans. I can confirm that my software is legit by using a hash. If it doesn't match the hash, I know it's likely a trojan.
        • I saw Prof. Franz give his talk last year and got a few minutes to pick his brain on this. The details were quite clear. Given the audience he wasn't holding back on details. The delivered software is unchanged. You can randomize at install time or (maybe) at load time. So your hashes are fine. Your local file integrity is a local problem.

          The shortcoming that I see is shared libraries. Shared libraries are evil from a security context and in the current invocation they don't get randomized (because they are

          • If it's randomized at load time, how would it be advantageous over ASLR?

            Shared libraries are evil from a security context

      I've heard that said on multiple occasions, but I haven't seen much to back it up. I suspect that even if there are theoretical advantages, in practice it's worse security. Out-of-date software remains one of, if not the, biggest sources of vulnerabilities. If multiple instances of the same library need to be updated, the likelihood that at least one of them will go unupdated is a great

  • Trusting trust (Score:4, Informative)

    by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Thursday May 29, 2014 @05:55PM (#47123873) Homepage Journal

    The problem with any nondeterministic compiler is that it prevents use of diverse double-compiling [dwheeler.com], a method to detect the sort of compiler backdoor described by Ken Thompson in "Reflections on Trusting Trust" [bell-labs.com]. You'd have to bootstrap the compiler with nondeterminism turned off (and with GUIDs, timestamps, and multithreaded allocation of symbols for anonymous objects turned off too) in order for the DDC bootstrap construction to converge.

    In any case, I've implemented a technique like this on the Nintendo Entertainment System. I wrote a preprocessor that shuffles the order of functions in the file, the order of opcodes within a function that don't depend on each other's results, and the order of global variables (or the order of fields in an object). One reason I implemented it was to use one variable as another's canary [wikipedia.org] to make buffer overflows easier to detect in an assembly language program. The other is watermarking the binary [nesdev.com] so that I can tell who leaked a particular copy of the beta version to the public. If you're interested, you can find my shuffle tool in the source code of Concentration Room [pineight.com].
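
    The canary idea translates into a small C sketch like this (the real tool works on 6502 assembly; the struct and values here are hypothetical): a known value sits right after a buffer, and a write that runs off the end clobbers it, turning a silent corruption into an immediate, checkable failure.

        #include <assert.h>
        #include <string.h>

        struct player {
            char          name[8];
            unsigned char canary;      /* the shuffle tool can pick a different neighbour each build */
            char          score[8];    /* the field the overflow would otherwise silently trash */
        };

        #define CANARY_VALUE 0xA5

        int main(void)
        {
            struct player p = { .canary = CANARY_VALUE };

            strcpy(p.name, "ABCDEFGHI");           /* 10 bytes into an 8-byte field */

            assert(p.canary == CANARY_VALUE);      /* trips here instead of failing mysteriously later */
            return 0;
        }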

  • by medv4380 ( 1604309 ) on Thursday May 29, 2014 @05:55PM (#47123885)
    It would probably cause more problems than it's worth, but it might be able to render some form of cheating worthless. If each program had a different layout then knowing what address you needed to hook into to cheat could be a problem. I don't see how it could cause more problems than anti-cheat software already does.
  • by MathFox ( 686808 ) on Thursday May 29, 2014 @05:56PM (#47123893)
    If you think a bit further... An operating system could load an executable at a different address [wikipedia.org] every time it is used, without recompilation!
    • If you think a bit further... An operating system could load an executable at a different address [wikipedia.org] every time it is used, without recompilation!

      The problem with ASLR is that it involves Position Independent Code [wikipedia.org]. The absolute addresses may change, but functions are called by their relative addresses to each other. When you know where one function is you know where all the others are as well. A mild example of this new randomization technique is to randomize the file order being fed into the linker. Different file order means different function layout. Then even if you know where one function is you don't know where all the others are without loo

  • by jones_supa ( 887896 ) on Thursday May 29, 2014 @05:57PM (#47123899)
    Okay, this technology is described in depth in a 2013 paper called librando: Transparent Code Randomization for Just-in-Time Compilers [uci.edu]. There might be even newer information available somewhere, if Mr. Franz or his colleagues have continued the research.
  • I swapped all the data bits around on my motherboard!

    Hahaha!

    Good luck!

    Oh wait...

  • by hamster_nz ( 656572 ) on Thursday May 29, 2014 @06:05PM (#47123993)

    Why bother with this at the compiler level?

    Just find 10,000 instruction pairs that can be reordered as they have no interdependencies, and reorder each of the pairs at random during the install phase. That gives you 2^10,000 unique executables, but all the debugging symbols and so on will remain the same.

    I guess that doesn't help you against stack-smashing and so on. But it will allow you to fingerprint whoever leaked your binary onto BitTorrent, which would be its eventual use.
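
    For what it's worth, here is a hedged C sketch of how an installer could use those pairs as a per-copy watermark (the site table and instruction lengths are entirely hypothetical, and the assumption above that each swap is behaviour-preserving is taken at face value):

        #include <stdint.h>
        #include <string.h>

        /* One swappable site: two adjacent, independent instructions of known length
           (each assumed to be at most 16 bytes). */
        struct swap_site { uint32_t off; uint8_t len_a, len_b; };

        /* Swap the pair at every site whose watermark bit is set.  The chosen bit
           pattern identifies this particular install (2^nsites possible binaries). */
        static void apply_watermark(uint8_t *image, const struct swap_site *sites,
                                    size_t nsites, const uint8_t *bits)
        {
            uint8_t tmp[16];
            for (size_t i = 0; i < nsites; i++) {
                if (!(bits[i / 8] & (1u << (i % 8))))
                    continue;
                const struct swap_site *s = &sites[i];
                memcpy(tmp, image + s->off, s->len_a);                         /* save A        */
                memmove(image + s->off, image + s->off + s->len_a, s->len_b);  /* move B first  */
                memcpy(image + s->off + s->len_b, tmp, s->len_a);              /* put A after B */
            }
        }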

    • That's a nice idea, but it won't work everywhere.

      In x86, for instance, the majority of instructions affect global flag registers. You can have two instructions that operate on entirely different memory locations and GP registers, but when you swap them the flags will end up set differently.

      You'll find very few instruction pairs that you can do this to without some ability to perform local analysis of the code.

      • It isn't that hard.... there are plenty of low hanging fruit - the classic easy case is the NOPs that are used to align jump destinations. Just find :

            [NON PC RELATIVE INSTRUCTION]
            NOP
            NOP
        and replace it with

            NOP
            [NON PC RELATIVE INSTRUCTION]
            NOP

        You could even patch the PC relative offset if you wanted to...

  • by vux984 ( 928602 ) on Thursday May 29, 2014 @06:07PM (#47124007)

    The problem with this in "Explain like I'm Five" terms:

    You can have no idea what the program you are running does.

    You cannot trust it. You cannot know it hasn't been tampered with. You cannot know a given copy works the same as another copy. You cannot know your executable has no back doors.

    On the security-minded front we have a trend towards striving for deterministic builds, so that we have some confidence, and a method of validating, that a source-to-executable transformation hasn't been tampered with: that the binaries you just downloaded were actually generated from the source code in a verifiable way.

    Another technique I'm seeing in secure conscious areas is executable whitelisting, where IT hashes and whitelists executables, and stuff not on the whitelist is flagged and/or rejected.

    Now this guy comes along and runs headlong in the other direction suggesting every executable should be different. And I'm not sure I see any real benefit, nevermind a benefit that offsets the losses outlined above.

    • by crow ( 16139 )

      It's simple. You use signed source code instead of signed binaries.

      Then you use a compiler and linker that does some simple things like randomly ordering variables and functions in the executable and on the stack. That makes it impossible for an attacker to know where some key variable is and exploit it through an overflow (whether on the stack or elsewhere). The attacker is far more likely to crash your program than to exploit a bug, which is much easier to recover from.

      Also, as pointed out elsewhere, wh

      • by vux984 ( 928602 ) on Thursday May 29, 2014 @06:58PM (#47124437)

        It's simple. You use signed source code instead of signed binaries.

        That doesn't really help.

        If every executable is different, then I have no information about the binaries I downloaded. I have to download the source, verify that it's the 'audited trusted source' by checking its hash and signatures, and then I have to compile it myself. Most people don't want to compile all their own code.

        It is good enough that OpenBSD released the source code, a trusted auditing group audited the source code, and a trusted build-validation group verified that the binaries on the OpenBSD site were generated from the audited source. I can just download the binaries, check the hash/signatures, and I'm good to go.

        And in the case of a corporate IT department, you use the randomizing compiler to build the binary that you push out to your clients. It may be the same throughout your company, but it will be different from anything anyone outside would have access to, which is probably good enough.

        The technique can be expanded to the home market, whereby joe-sixpack is running executable whitelist-reputation subscription software that will flag anything on his system that isn't "known good". Antivirus software is starting to head in this direction -- where it maintains databases of 'known good' executables; you've probably even seen them say "this executable is not known... submit it for analysis" -- take that system to its logical conclusion, and we could see community sites maintain executable whitelists that are as effective as adware blockers. (And they'd have no qualms about flagging "technically not illegal malware but nobody actually wants to run this shit", e.g. toolbar search redirections through popup advertising portals, that the AV guys are currently too scared to just block outright.)

        Community-managed executable whitelists with operating-system-level enforcement support could potentially make a serious dent in malware on the average uninformed user's computer. It would help close a lot of attack vectors. More effective, I think, than 'randomizing' variable layout in the compiled executable.
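
        As a concrete (if simplified) sketch of that whitelist check, assuming OpenSSL's SHA-256 routines are available and with the whitelist reduced to an in-memory array:

            #include <openssl/sha.h>
            #include <stdio.h>
            #include <string.h>

            /* Hash an executable on disk.  Returns 0 on success, -1 on I/O failure. */
            static int sha256_file(const char *path, unsigned char out[SHA256_DIGEST_LENGTH])
            {
                unsigned char buf[1 << 16];
                SHA256_CTX ctx;
                size_t n;
                FILE *f = fopen(path, "rb");

                if (!f)
                    return -1;
                SHA256_Init(&ctx);
                while ((n = fread(buf, 1, sizeof buf, f)) > 0)
                    SHA256_Update(&ctx, buf, n);
                fclose(f);
                SHA256_Final(out, &ctx);
                return 0;
            }

            /* 1 if the digest is on the known-good list, 0 otherwise. */
            static int is_whitelisted(const unsigned char d[SHA256_DIGEST_LENGTH],
                                      const unsigned char list[][SHA256_DIGEST_LENGTH],
                                      size_t count)
            {
                for (size_t i = 0; i < count; i++)
                    if (memcmp(d, list[i], SHA256_DIGEST_LENGTH) == 0)
                        return 1;
                return 0;
            }

        Per-install diversification breaks exactly this kind of globally shared list; it only keeps working if the whitelist is built locally, after the randomizing step.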

        Also re:
        Then you use a compiler and linker that does some simple things like randomly ordering variables and functions in the executable and on the stack.

        Stronger ASLR and DEP type features in the OS to do executable layout randomization at runtime I think represents a better approach to this than randomization at compile time.

  • I can't see how Franz's idea is materially different from "Randomized instruction set emulation" by Barrantes, Ackley, Forrest, and Stefanovic (2005).
  • by Anonymous Coward

    I worked in this field a good many years ago, and I remember how we hoped that new Windows environments would suppress the prevalence of viral executables.

    Then Macro Viruses turned up.

    Now, Macro Viruses work at a higher level than machine code. They will therefore work on ANY machine that recognises, for instance, the WORD macro language - a mainframe, if WORD was ported to it. And you can't change macro languages - they are standardised.

    I've seen many academics propose the 'answer' to viruses, and watched

  • by Anonymous Coward

    The anti-virus product makers are really going to hate this.

  • This is only an issue because of unchecked pointer arithmetic. For garbage collected and range checked items, you can't take advantage of co-location of data. In a JVM, if you try to cast an address to a reference to a Foo, it will throw an exception at the VM level. Indexing arrays? Push index and array on the stack, and it throws an exception if index isn't in range when it gets an instruction to index it. In these cases, pointer arithmetic isn't used. In some contexts, you MUST use pointer arithmet
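
    A tiny C sketch of the contrast being drawn (the names are invented): the unchecked access silently reads whatever happens to be co-located, while the checked version turns the same mistake into a loud, detectable failure, which is roughly what a JVM does on every array access.

        #include <stdio.h>
        #include <stdlib.h>

        /* Raw pointer arithmetic: an out-of-range index quietly reads a neighbour. */
        static int get_unchecked(const int *a, size_t i)
        {
            return a[i];
        }

        /* Range-checked access: the same mistake aborts loudly instead. */
        static int get_checked(const int *a, size_t len, size_t i)
        {
            if (i >= len) {
                fprintf(stderr, "index %zu out of range (len %zu)\n", i, len);
                abort();
            }
            return a[i];
        }
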
  • . . . it's a giant step backwards. I used to be a total advocate of monolithic kernels and all executable code built locally from source, but the current method using package management (yum, apt, etc.) has been incredibly beneficial - both to administrators such as myself and for support personnel. It eliminates a whole raft of questions (what compiler was used? what switches/options were in effect? what defaults were configured?) and allows exactly what this would eliminate - the reasonable expectati
  • This is what polymorphic software does, and I think you'll find it on pretty much every computer that's part of a botnet.

    By this measure, botnet software should be really difficult to detect and compromise -- and yet it isn't.

    Also, it's worth noting that while government-sponsored and targeted attacks would be more difficult using this method, most malware depends on whatever the current security flaws are and/or human failure to initially get its foot in the door.

    And the logic path wouldn't be changing, ev

  • Viruses in nature mutate randomly. Computer viruses don't.
    Computer virus designers are intelligent, hostile, and evil in intent.
    If there's a way around it, they'll find it and it's game over.

    Besides, many if not most attack vectors wouldn't care a whit - tricking a user into executing code would still work, SQL injection, cross site scripting...

  • This seems to me the wrong level for software diversity, too low. A bug in the source will be executed in all variants (think sql injection), while an exploit that depends on particular bytes in particular locations can already be made difficult by ASLR.

    What about having higher level protocols that the software of a given category must adhere to, and various programs that treat data according to those protocols? You know, like that internet thing before the prevalence of web2.0 megasites, or like posix. The

  • They use lists of known file hashes to search for files unique to your computer. If this were done they would have to examine every file.
  • "Inspired by the natural resistance offered to pathogens by genetically diverse host populations, Dr Michael Franz at UCI suggests that common software be similarly hardened against attack by generating a unique executable for each install."

    What a good idea, isn't this what they did with the Space Shuttle ..
  • "Microcode is a layer of hardware-level instructions or data structures involved in the implementation of higher level machine code instructions in central processing units" ref [wikipedia.org].
  • As a professional software tester let me be the first to say noooooooooooo !

  • This doesn't fix the problem. It makes the chances of exploitation a bit smaller, on a "per-try" basis.

    Back in the old days, some daemons or setuid programs would do insecure things with /tmp. So the hacker would make a program:
    target = "/tmp/somefile";
    while (1) {
    unlink (target);
    link ("/etc/passwd", target);
    unlink (target);
    link ("/tmp/myfile", target);
    }
    The daemon would check access permissions of the "target", hopefully

  • huh. this sounds very similar to the theoretical virus designs i came up with many years ago. yes, you heard right: turn it round. instead of the programs on the computer being randomised so that they are resistant to malware attacks, randomise the *malware* so that it is resistant to *anti-virus* detection. the model is basically the flu or common cold virus.

    here's where it gets interesting: comparing the use of randomisation in malware vs randomisation in defense against malware, it's probably going t

    • by Anonymous Coward

      That's not theoretical at all. You're over 20 years late to the party. It's called polymorphism or metamorphism (depending on whether it changes individual instructions for similar ones, or actually self-modifies its code).

      The idea was first predicted by the computer scientist Fred Cohen. The Slovenian VXer Lucky Lady demonstrated it in 1988 on the Atari ST, and around about the same time, Mark Washburn with V2PX/1260 on the PC, a Vienna modification; more practically, the first widely released version of s

  • "IT Department. Have you tried randomizing your compiler?"

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...