Operating Systems of the Future
An anonymous reader writes: "'Imagine computers in a group providing disk storage for their users, transparently swapping files and optimizing their collective performance, all with no central administration.' Computerworld is predicting that over the next 10 years, operating systems will become highly distributed and 'self-healing,' and they'll collaborate with applications, making application programmers' jobs easier."
Beowulf cluster (Score:2, Funny)
Oh, wait...
Crazy users and VBS scripts (Score:3, Funny)
I imagine great horrors as the whole cluster goes down in a mass emailing.
/satterth
Amoeba (Score:5, Informative)
**** AMOEBA IS OBSOLETE ***** (Score:2)
:-)
Futurists are stupid (Score:4, Insightful)
Re:Futurists are stupid (Score:3, Insightful)
Apparently, the public has a certain tolerance to defects and bugs. A fine example is the automobile, with its near-certain breakdowns, despite Tucker proving otherwise [protsman-antiques.com].
Re:Futurists are stupid (Score:2)
Cell phones rarely crash (granted, their input is much simpler), but I think this is because, since 'stability' isn't a marketing focus for them, makers really do have to make them stable. As long as 'stability' is a marketable selling point, computers will have to be unstable.
Re:Futurists are stupid (Score:2)
Of course, that's changing, what with web servers and such.
On the other hand, are you willing to pay the price premium for a Unix desktop PC? Ala Apple, OS X, Darwin, BSD, etc?
Re:Futurists are stupid (Score:2)
More and more with each passing day. Looks like I'll be coming home to OSX in the near future.
Re:Futurists are stupid (Score:2)
Re:Futurists are stupid (Score:2, Redundant)
It's called a compiler. You use C/C++, or whatever, to 'tell' the computer what the program it should make will do.
Computers that can 'program themselves' is simply an extension of that concept to the point where (presumably) you can 'code' in your natural spoken language. A computer shouldn't do anything until you've told it what to do. Currently, we use C, but there really isn't a functional difference between English and C except for the granularity of the specification of the problem and the desired implementation of its solution. For instance, with PHP, I no longer need to tell the computer that the $foobar variable will be an unsigned long
Re:Futurists are stupid (Score:2, Informative)
Really? Please tell me how to break down:
"Why are we here?", or "I think I love her", or "He died last week"
into a sufficient granularity to be implemented in C, of course with the full semantic connotations involved. There's a huge difference between a formally defined language and a natural language. That's why NLP is so damn hard.
As far as computers programming themselves, well... a c/c++ compiler translating c/c++ code into machine code isn't the same thing. Translation *is* a necessary step, but you also have to add the ability to change the running program. For that you need a language that blurs the distinction between data and instructions.
Re:Futurists are stupid (Score:5, Insightful)
My point was that instructions are data. But I challenge you to show that, in order to solve a problem, you can provide data that does not encompass the instructions. "My house is on fire" is data that will instruct people to run out of it, but only because they were previously programmed with a 'fire' trigger to escape when it's input into their system.
So neither English nor C can go outside of its own contextual setting. English is just so much more complicated, with so many more possible branches of execution based on data, that it's difficult to compare the two without either belittling humanity or getting 1984ish about technology.
"Why are we here?" has multiple answers, so you can really only validate successful self-programming if you already think you know what the answer is. And for that, you depend on previous data entry.
Re:Futurists are stupid (Score:2, Informative)
I already can't find a job (Score:3, Funny)
Re:Futurists are stupid (Score:3, Insightful)
Re:Futurists are stupid (Score:2)
"In the next 10 years, humans will be able to make sensible decisions that do not give them excuses and scape-goats to feel unhappy about their experiences in this society."
Honestly, I think there is an entrenchment in the 'bitterness' and 'stress' social industry that we're reluctant to give up. The day computers actually start working, we'd have to start focusing on our own problems again - the very antithesis of the desires of a market.
Re:Futurists are stupid (Score:2, Informative)
Re:Futurists are stupid (Score:2, Funny)
Re:Futurists are stupid (Score:3, Insightful)
One of the key laws of nature is : Shit Happens.
This is as true for code in your PC as it is for crawlies in nature.
We want to fool ourselves that the PC is a clean and closed environment which we have full control of but it just isn't true. That storage device that was there a picosecond ago may have just failed or been removed, the network connection may have just been severed, another program may be running amok and draining system resources just as another needs it.
Nature mostly gets around unexpected problems, we need OS's and languages that can do the same.
Your goal of OS's that don't crash and hardware that doesn't "lock up" aren't incompatible with that.
Re:Futurists are stupid (Score:2)
> Nature mostly gets around unexpected problems
The dinos would agree with 'mostly'. I want mostly. I want computers that are built to work regardless of input, unless said input is likely to occur at a frequency of, say, once every decade or some crap.
Companies are notorious for turning this around. Witness warranties. "This product will work unless you do X" Sometimes X is why people buy it in the first place!
In the realm of computers and hardware, there is nothing to say that we can't make the PCI bus X times slower in order to build complete down-to-electron-level fault tolerance into it. Obviously, I'm unaware of the actual feasibility of this, but I think the people above, in blaming the market, were far more on point than saying, "Well, it happens in nature, so it happens in PCs." Sure, but I didn't see species dropping off the face of the earth like flies until the 1970s, when we started making impossible-to-fulfill demands of our ecosystem.
Same for computers. The vision, the story, the 'sales pitch' is really light-years ahead of the design. It could only happen in an economy whose goal is to get shit out as fast and cheaply as possible to everyone, instead of considering the social and unquantifiable costs of certain technologies. Until manufacturers are really allowed to say, "We made it X times slower, but you can't crash it short of exercising your physical superiority on it, so I dare you to even try to feel stress or mistreatment in using it" - and I think that might be never under current circumstances - the posters above were more on point than you were.
Which isn't to say that I don't agree
Re:Futurists are stupid (Score:2, Insightful)
I'm so sick and tired of what the next 10 years will bring us.
Right. I think the point is, though, to quote from the article:
The target environment for Farsite is an organization in 2006 with 100,000 computers, 10 billion files and 10 petabytes (10,000TB) of data.
Managing data and applications on that scale with PCs today sucks. Data synchronization is a HUGE issue already. The question futurists ask is what must we change for that to be manageable?
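As a quick sanity check on the article's target figures, the per-machine share is not outrageous. The inputs below are taken straight from the quote above; the rest is arithmetic.

```python
# Back-of-the-envelope math for the Farsite target environment
# (machine, file, and storage counts are the article's figures).
machines = 100_000
files = 10_000_000_000
data_tb = 10_000  # 10 petabytes = 10,000 TB

files_per_machine = files // machines
storage_gb_per_machine = data_tb * 1_000 // machines  # TB -> GB

print(files_per_machine)       # 100000 files per machine
print(storage_gb_per_machine)  # 100 GB per machine
```

So each machine carries about 100,000 files and 100 GB on average - the hard part is the synchronization, not the raw capacity.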
Re:Futurists are stupid (Score:2, Interesting)
That's just in general. Apply it to the technology sector, and it becomes even more true. About the best you can do is say "wouldn't it be cool if...?" But basically these guys just take an interesting research paper (out of the thousands out there) and act like that's what's actually going to happen.
But I'm better than them! I really can predict the future! I predict that in 10 years, there'll be a bunch of people predicting what will happen 10 years from then, and nearly all of them will end up being wrong. That's right, you heard it here first.
Gyrocopters, Rocketcars, Automats... (Score:2)
Instead what we end up with is a dystopia that looks more like "Blade Runner" and less like "The Jetsons".
Re:Gyrocopters, Rocketcars, Automats... (Score:2)
It's in the laws of thermodynamics, but we have to ignore it because we all depend on it to offload those problems (and sometimes the original problem, if the technology 'transports' the original problem rather than solves it) to other parts of the world.
Re:Futurists are stupid (Score:2)
Original poster complained about stability.
Someone else mentioned Unix.
This poster mentioned OS X, as Unix on commodity hardware.
You reject it; fine, stick with Linux or Windows
There aren't exactly many competitors for a desktop Unix box, are there?
A vision of OS future : tiny reliable components. (Score:5, Interesting)
It's definitely a good approach, although EROS is still quite experimental.
Re:A vision of OS future : tiny reliable component (Score:2)
Re:A vision of OS future : tiny reliable component (Score:2)
So while it is certainly a good approach to have very stable base components, it isn't an all-solving approach.
Re:A vision of OS future : tiny reliable component (Score:2)
Beware emergent behaviour (Score:5, Insightful)
Unfortunately that doesn't necessarily make the OS itself reliable. The emergent behaviour of a system is different from the behaviours of its components.
After all, all software is based on multiple tiny, extremely reliable components (F00F and FDIV bugs aside) -- the processor's op-codes -- and look how flaky most software is.
Sure, you've got to start with reliable components, but you have to combine them in just the right way, too.
Eros's features for keeping EB to a minimum (Score:3, Informative)
in eros everything is orthogonally persistent, meaning that every object, without doing anything on its own, has its state saved by the system.
the other neat feature that makes it more reliable even in the face of bad application-level code is that instead of access-list-based security a la unix, there are fine-grained permissions called capabilities that govern what any object may do to any other.
these features coupled with transparent distribution could guarantee that even if the terminal in front of you is struck by lightning you'll be able to move to the nearest working one and pick up *exactly* where you left off!
check it out- there are a lot of kewl os level ideas that could make life better if adopted by more mainstream oses.
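For readers unfamiliar with capabilities, here's a toy sketch of the idea as I understand it -- hypothetical classes, not the actual EROS API. A holder can invoke only the rights its capability names, and can hand out further-attenuated capabilities, which is what makes damage containment possible even with buggy application code.

```python
# Toy capability model (illustrative only, not EROS's real interface).
class Capability:
    def __init__(self, obj, rights):
        self.obj = obj
        self.rights = frozenset(rights)

    def invoke(self, op, *args):
        # Access is decided by the capability itself, not by an ACL lookup.
        if op not in self.rights:
            raise PermissionError(f"capability does not grant {op!r}")
        return getattr(self.obj, op)(*args)

    def attenuate(self, rights):
        # A capability can only be narrowed, never widened.
        return Capability(self.obj, self.rights & set(rights))

class File:
    def __init__(self, data=""):
        self.data = data
    def read(self):
        return self.data
    def write(self, text):
        self.data += text

f = File("hello")
rw = Capability(f, {"read", "write"})
ro = rw.attenuate({"read"})
print(ro.invoke("read"))     # hello
# ro.invoke("write", "!")    # would raise PermissionError
```

Note the asymmetry with unix-style ACLs: the object never consults a user database; whoever holds the capability has exactly the listed rights and no more.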
Expectation is Key to Reliability (Score:3, Interesting)
First off, we should learn a lesson from biology. The bee, for example, has about a million interconnected neurons. Yet the bee's highly sophisticated behavior is extremely robust and efficient. How does nature do it? The answer has to do with parallelism and expectations.
1. Parallel processing ensures that signals are not delayed, i.e., their relative arrival times are guaranteed to be consistent.
2. Expectations are assumptions that neurons make about the relative order of signal arrival times.
We can emulate the robustness of nature by first realizing that computing is really a species of the genus known as signal processing. We can obtain very high reliability by emulating the parallelism of nature and enforcing a program's expectations about the temporal order of messages: no signal/message should arrive before its time. The use of stringent timing constraints will ensure that interactions between multiple tiny modules remain consistently robust. Enforcement should be fully automated and an integral part of the OS.
Of course, this is only part of it. The other constraints (e.g., the use of plug-compatible links, strong typing, etc...) are known already. No message should be sent between objects unless first establishing that plugs are connected to compatible sockets, i.e., that they must be of the same type.
The most problematic aspect of computing, IMO, is that it is currently based on the algorithm. Problem is that algorithms wreak havoc in process timing and the end result is unreliability. The algorithm should not be the basis of computing. To ensure reliability, computing should be based on signal processing. Algorithms should only be part of application design, not process design. Just one man's opinion.
Re:A vision of OS future : tiny reliable component (Score:2)
Whew (Score:2)
Re:Whew (Score:4, Funny)
As if anyone would trust (Score:2)
I doubt this will mean the death of the sys admin...someone still has to orchestrate this thing from some sort of central-type position.
You will be assimilated (Score:4, Funny)
I imagine my Linux boxen surrounded by a couple of stiff-legged, lumbering, wire-encrusted Borg machines, finally proving that resistance is, indeed, futile, as they make my boxen over in their own image.
And Bill's head, with that little shiny snake-like tail, being clamped onto his body as he assumes command.
Linux? (Score:2, Funny)
Obligatory Star Trek Reference (Score:3, Funny)
Until then, this all sounds like cute window dressing built on top of the next NT kernel.
As long as... (Score:5, Informative)
Grumble, grumble...
Re: (Score:2)
Links (Score:2)
http://www.microsoft.com/technet/security/bulle
And a thread talking about it on macintouch:
http://www.macintouch.com/officevx3.html#feb08
A nice conspiracy theoretic rant (Score:2, Interesting)
Your digital "rights" managed TrustedPCs will connect to a giant virtual disk array via the network, where what you store will be subject to government and corporate monitoring and removal.
Think this is nuts? Where are the 200GB drives? Why is Intuit pushing us to store tax and financial information on their site? Why does Microsoft want to give us an authentication token that's good for retrieving our information "anywhere, anytime."
Why would anyone (other than a legitimate large corporation) have a need for local storage, once the Internet storage product is fast and cheap? I can only imagine one use for local storage--copyright infringement.
Re:A nice conspiracy theoretic rant (Score:2)
Here [pricewatch.com].
Why is Intuit pushing us to store tax and financial information on their site? Why does Microsoft want to give us an authentication token that's good for retrieving our information "anywhere, anytime."
For now, they're giving you the option more for your convenience than anything. If you multiboot, or even if you lose your Quicken data in a hard drive crash (this has happened to me before), there will be an offsite backup of it that you can access.
Not to say that it won't turn into something bad, though. As most of us here probably do, I prefer backing up my own data instead of letting the software company do it for me. I am a big proponent of privacy, and I see a definite potential for abuse of these "convenient" features later on. But that doesn't mean they're doing anything bad with it just yet.
Scalability problems, anyone? (Score:4, Insightful)
Surely there will be major scalability problems with something like this, a la Gnutella [slashdot.org]?
The potential pitfalls of 100,000 computers trying to access each other across the same network give me headaches just thinking about it.
Re:Scalability problems, anyone? (Score:3, Insightful)
Re:Scalability problems, anyone? (Score:2)
Right now gnutella's main problem is that nobody knows this. The network can easily handle the number of users it has; it's just competing with the much larger FastTrack network, which simply has more to offer.
Re:Scalability problems, anyone? (Score:5, Informative)
That's why it's research. I've met and talked to Bill Bolosky (Farsite project lead); he's very clueful wrt scalability in general, and well aware of the problems that networks like Gnutella (an unusually naive protocol, BTW) have run into. However, like the folks working on OceanStore [berkeley.edu] or CFS [mit.edu] or many other projects, the Farsite folks have a fairly formidable arsenal of innovative techniques they can apply to the problem. The details are still being worked out, of course, because that's what research is all about, but the people working in this area do seem to be making real progress toward solutions that could scale to such levels.
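To give a flavor of what those "innovative techniques" can buy over Gnutella-style flooding, here's a minimal consistent-hashing sketch (my own illustration, not Farsite's or OceanStore's actual design): each file's location is computed locally from a hash, so lookups need no central index and no broadcast, and adding a node remaps only a fraction of the files.

```python
import hashlib
from bisect import bisect_right

# Minimal consistent-hashing ring (illustrative sketch).
def h(key: str) -> int:
    # A stable hash gives every string a fixed position on the ring.
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        # Each node sits at its hash position on the ring.
        self.points = sorted((h(n), n) for n in nodes)

    def node_for(self, filename):
        # The first node clockwise from the file's hash owns the file.
        keys = [p for p, _ in self.points]
        i = bisect_right(keys, h(filename)) % len(self.points)
        return self.points[i][1]

ring = Ring([f"node{i}" for i in range(8)])
print(ring.node_for("/home/alice/report.txt"))
```

Real systems layer replication, caching, and Byzantine-fault tolerance on top of placement schemes like this, but even the bare version avoids the O(network) query cost that sank Gnutella.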
Scary (Score:2, Interesting)
Now if it were an open source, distributed OS with self healing I might be OK. I guess I just object to giving that much control to a large corporation whose main concern is profits and not my privacy.
Re:Scary (Score:2)
When I worked at BigCorp everyone was networked, and you couldn't log on until you'd installed the latest gimcrack they had pushed to your desktop - never mind if it rucked up your other programs. And they would interrupt whatever you were doing to "push" news broadcasts onto your screen every time they made a sale -- at least that was back when they were actually making sales. It seemed kind of Big Brotherish. (Of course, it was their gear.)
I don't even like it when someone comes into my cube and looks over my shoulder, much less sharing all my files.
As far as my own gear goes, I'd rather sit in a cave alone and scratch images into the sand with a sharp stick than be connected to the kind of all-encompassing network you describe.
OS's will be so smart in 10 years..... (Score:5, Funny)
So, Bill is finally going to release a version of windows that will automatically simulate pressing ctrl-alt-delete when it blue screens.
Many people would say it's MS's customers that have been fault tolerant.<rimshot!>
Re:OS's will be so smart in 10 years..... (Score:2, Informative)
Actually, they already invented that with W2k.. if you happen to be on a coffee break while it crashes and don't pay attention to whether you are doing a login or an unlock, then you might be surprised by a fresh desktop just when you thought there were too many apps anyway..
And they're called (Score:3, Funny)
Hmmm... (Score:5, Interesting)
I predict that there will never be a revolutionary new operating system until we break free of the chains imposed by Posix compliance. Until then, we're stuck with files that have to be streams of bytes, ugo-style permissions, non-wandering processes, incompatible RPC calls, &c.
And the real pain is there have been OS'es that have had simple & elegant solutions to problems that are hard under unix (Aegis, Multics, VMS, TOPS, ...) that were pushed aside by the steamroller that is Unix.
But to be fair, many of the forgotten O/S's are now forgotten because they weren't as general purpose as Unix. Unix is the great compromise. But it's hard to strive for the best when you've already accepted compromise.
Re:Hmmm... (Score:2)
OK, you tell the CIO of [mid-sized corp] that he has to junk his $5m worth of Sun boxes because his O/S is a 'compromise'. The enterprise game is a one-shot deal. This isn't "ok, that pc is broken, ship it back to Dell" it's "you spent $500k on a machine that wasn't good enough? go find a new job".
The people that make technology decisions don't care about elegance.
Better get crackin' (Score:2)
But seriously, somehow I don't see this in 10 years.
So what's so special? (Score:4, Insightful)
Will the majority of the computer using populace still be double clicking, dragging and dropping, and 'opening' folders and hard drives 10, 15 years from now?
Could be. Could be.
Re:So what's so special? (Score:2)
Freenet (Score:2, Informative)
Druthers (Score:5, Funny)
Re:Druthers (Score:2)
Re:Druthers (Score:2, Funny)
The future belongs to Plan 9 (Score:5, Insightful)
And don'cha just love it when MS "predicts" that they'll "innovate" by duplicating it under the MS banner?
Anybody care to "predict" the havoc that might ensue when such OS's gain wide public use? I'd be leery of using one even on my isolated-from-the-internet home network until it was proven to be absolutely secure, something today's less interactive computer nets can't even manage.
I'm happy that people are looking forward to, and researching, the future.
Would it hurt if a few people spent a bit more time making the present work worth a shit?
KFG
...twixt the cup and the lip (Score:3)
The bad side, which is closer to reality, is that a computer company working in an "extend our existing market" mode will find it irresistible to tie new things tightly to the innards of what has already been deployed. That's a great way to ensure that you inherit security flaws from whatever old model you had, however good the theory of your new system is.
Borg Time... (Score:5, Funny)
1.
2. ComputerWorld story that includes a line about how Microsoft sees the computer of the future as one giant logical system with many small partitions.
Is anyone else joining the dots like I am?
Seems like a good idea... (Score:4, Funny)
Amen again Brother! (Score:3, Insightful)
How about getting rid of drive letters in Windows/Dos and having mount points!
How about a better drive interface than the stupid IDE interface. (Macs did it right with SCSI, but now to be "cheap" they do it too [sigh])
And for self healing? If Windows is still around and the predominant OS, I'll pass on the "self healing" - it'll be more like "death-without-dignity." Remember NT 4 SP 6? [Shiver] I don't want MS "self-healing" my machine!
In fact, I don't think I want anyone self healing my machine until software is lots more robust than it is now. At least when I apply patches to my machine and notice that something isn't working right, I know I _just_ patched it, so it might be the patch. With someone else applying patches without my knowing, I would be screwed!
Yeah, all those "wonderful things are just around the corner" articles are neat, but I would truly be happy with some "incremental" changes.
Let's forget "visionary" for a while and just fix the crap that's broken right now! Pleeeeease!
Cheers!
Re:Amen again Brother! (Score:2, Insightful)
Perhaps I am misunderstanding, but you want to get rid of interrupts? Interrupts are a good thing; what we need to do is increase the number of them instead of removing them. If I remember correctly, the PowerPC architecture has 64 hardware interrupts instead of the measly 16 on the x86 platform. We want more interrupts, not fewer.
How about getting rid of drive letters in Windows/Dos and having mount points!
I agree with this, although in the short term it would be a pain migrating existing users over. Everyone would have to learn to use
How about a better drive interface than the stupid IDE interface. (Macs did it right with SCSI, but now to be "cheap" they do it too [sigh])
Oh, the great IDE vs. SCSI debate. I don't think that Macs support IDE to be "cheap"; I think they do it to be relatively competitive/affordable. For some reason unknown to me, SCSI drives are much more expensive than IDE drives. Looking at today's pricewatch listings, I found that the cheapest $ per GB for SCSI was $3.85/GB for a 36.4GB drive, while on the IDE side you could get a 60GB drive for $1.37/GB. The cheapest SCSI is about 2.81 times the price of IDE per GB, never mind that some SCSI drives ran over $10/GB. While I do realize that SCSI is superior to IDE (higher performance, less CPU utilization, more devices per controller), and I would always use SCSI in a server or workstation, is it really worth almost 3 times the price for the desktop? Most desktop uses (browsing the internet, email, word processing, solitaire) would not even be noticeably improved by the increase in performance. For tasks such as these, IDE is more than adequate.
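The ratio quoted above checks out; the prices are the poster's pricewatch figures, not current data.

```python
# Verify the $/GB comparison from the pricewatch survey above.
scsi_per_gb = 3.85  # cheapest SCSI: 36.4 GB drive
ide_per_gb = 1.37   # 60 GB IDE drive

ratio = scsi_per_gb / ide_per_gb
print(round(ratio, 2))  # 2.81
```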
What I would find interesting is a size/performance comparison between a $x SCSI drive and a $x IDE hardware RAID array.
Links provided (Score:3, Informative)
Butler Lampson [microsoft.com], for papers on Byzantine reliability, mostly based on the work of
Leslie Lamport [microsoft.com]
More like Windows or linux of the future (Score:2)
Brrr... (Score:5, Funny)
Windows Inheritance: "Psst. You crouch behind j.user's legs and I'll give him a push."
Clippy 5000: "OK"
*SHOVE*-splat!
Software: "Have a nice trip? See you next Fall! Muahaha!"
The #1 Rule of Network Security (Score:4, Interesting)
Whoever thought up this pipe dream apparently doesn't understand the Zeroth Law of Network Security: If you want information to be secure, DON'T PUT IT ON THE FUCKING NETWORK!
Seriously! As if most business OSes don't default to the least-secure settings already! Why would you want to run important apps on a system where the default is to share anything and everything with any computer in listening distance?
deja vu all over again (Score:2)
If the last 18 years are any indication... (Score:3, Interesting)
I'll believe the distributed file-storage myth when I see it. To me, it sounds as if it would hog bandwidth, just like gnutella does. I don't see any change coming in the way I store files on my computer. It's fast, efficient, and hasn't needed a change.
SysAdmins need not quit their day-jobs. As long as Microsoft is providing this technology, you can be sure that it will run into snags and security vulnerabilities. Increased complexity = increased vulnerability.
...and that's all I've got to say about that
Hmmm... (Score:2)
Hmmm...my first thought..."ScanDisk is checking harddrive C..."
Farsite is a serverless, distributed system that doesn't assume mutual trust among its client computers. Although there's no central server machine, the system as a whole looks to users like a single file server.
Cool...Microsoft invents the cluster. I'm sure the folks who created Beowulf clusters stole the idea from them...come to think of it, those Gnutella folks blatantly ripped them off too...
I'd say something mean, but I assume this was meant as a joke...
Sounds like MS worried about file server future. (Score:2)
It sounds to me like MS is worried about the future of the file server market. Perhaps they see the writing on the wall... it says LINUX. Who's likely to implement linux servers? Those that can't afford to pay for a Win2K Server license. "But wait, if you upgrade to the new Farsite OS, you don't need a server! So you don't need to use Linux at all! Think of the cost savings when you don't need to buy or maintain a separate server! Think of the savings in administration costs!" Or some hype along those lines. With large corporations, with all that spare hard drive space and idle processors, how many servers could they replace? Have they done the math and come up with figures that spell doom for the file server market?
Sounds like Freenet (Score:2)
Maybe it's just me, but the Farsite diagram at the bottom of the article really reminded me of how I understand Freenet to work...Is MS attempting to create a DRM-enabled variation of this same idea?
I don't imagine that Farsite has the same goals as the Freenet project, but there is enough similarity in the underlying technology that I was struck by it. Maybe MS is recognizing the value of the architecture, if not some of its potential uses?
Self healing OS? (Score:2)
That's fine, but... (Score:2)
More than anything else, the user cares about the OS interface. How does it work?
The user doesn't give a damn about where a file is stored. He just wants to launch his programs quickly and locate his files fast. Why can't we do some thinking on this basic issue (and not have the end result be some bulky goofy 3-D environment)?
/dev/files in UNIX (Linux) (Score:2)
Mainframes got very sophisticated in automating this. It was also somewhat difficult to program commands in IBM's or DEC's data-definition languages. Much of this was lost in downsizing to personal workstations and is being rediscovered again.
poorly researched article (Score:2, Interesting)
IBM believes that we are at just such a threshold right now in computing. The millions of businesses, billions of humans that compose them, and trillions of devices that they will depend upon all require the services of the I/T industry to keep them running. And it's not just a matter of numbers. It's the complexity of these systems and the way they work together that is creating a shortage of skilled I/T workers to manage all of the systems. It's a problem that's not going away, but will grow exponentially, just as our dependence on technology has.
From my understanding, autonomic computing and other projects like it are going for something much bigger than "let's make our OS smarter." I seriously doubt this is targeted at the consumer, since there are too many privacy issues. The real benefit of "self healing" is in the corporate environment, where uptime is critical. Autonomic's goal, as I read it, is about making systems work together seamlessly to improve reliability and scalability. Say a server has some hardware problem or a switch is dying. Things like these could cause real financial losses, so having smart systems that reconfigure/heal themselves could reduce the cost of hardware and software failures. How many times have admins had to get up at 3 am to fix the webserver because some log ran amok and ate up all the HD space? Having a standard system for handling these problems would help make systems more reliable.
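The 3 am log scenario can be sketched as a tiny "autonomic" watchdog. This is purely an illustration of the idea: the `/var/log/myapp` path and the 90% threshold are my own assumptions, not anything from the article.

```python
import gzip
import os
import shutil

# Illustrative self-healing sketch: if the partition crosses a usage
# threshold, compress the oldest logs instead of paging an admin.
THRESHOLD = 0.90            # assumed trigger point
LOG_DIR = "/var/log/myapp"  # hypothetical application log directory

def usage_fraction(path="/"):
    # Fraction of the filesystem holding `path` that is in use.
    total, used, _free = shutil.disk_usage(path)
    return used / total

def compress_oldest_logs(log_dir, keep=5):
    # Gzip all but the newest `keep` logs, oldest first.
    logs = sorted(
        (f for f in os.listdir(log_dir) if f.endswith(".log")),
        key=lambda f: os.path.getmtime(os.path.join(log_dir, f)),
    )
    for name in logs[:-keep]:
        path = os.path.join(log_dir, name)
        with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
            shutil.copyfileobj(src, dst)
        os.remove(path)

if usage_fraction("/") > THRESHOLD and os.path.isdir(LOG_DIR):
    compress_oldest_logs(LOG_DIR)
```

A real autonomic system would of course standardize this kind of policy across the whole fleet rather than bolt it onto one box, which is the point the post above is making.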
Too many reporters are getting way too lazy.
Who else thinks that 2006 is undoable? (Score:2)
Plus, as these are fortune 1000 companies, what is the bet that they won't even look at this technology for another 10+ years.
Maybe, just maybe, it will be possible (well, it already is, but...). What is the chance of it really being deployed?
Plus, where are the offsite backups going to be done? Does this mean that every workstation has to be left on at all times? How much retraining does this require? Yes, we know that you used to get fired for leaving your machine on, but if you don't from now on, you will be fired!
Methinks that the dream will not match the reality....
Re:Who else thinks that 2006 is undoable? (Score:2)
There are 80,000 files on my machine. 42,000 are in or under the Windows folder. That is 38,000 non-OS files. (Actually many more than that, because lots of non-OS stuff gets into the Windows folder -- e.g., every internet bookmark is a separate file in Windows\Favorites.) And that's on a 10G hard drive, with less than 7G used!
*Cough* ... *cough* ... from Microsoft??? (Score:2, Redundant)
How long has it taken for Microsoft to make an OS that simply DOES NOT CRASH?!
With around 15 years of work and refinement, they may just about have gotten to that point with Win2000 and WinXP. How much effort did it take them to do long file names, for heaven's sake? Let's not even get into issues about the quality of multitasking.
I simply can't take a prediction seriously that a (real) Borg Operating System will be a reality in 10 years. Especially coming from Microsoft. Heck, I wouldn't believe such a prediction from an OS company I respect. But from Microsoft??? Consider the source.
Sig: What Happened To The Censorware Project (censorware.org) [sethf.com]
My predictions for computers over the next decade (Score:2)
Computers will become easier to use.
And as they get easier to use, the number of people who really understand computers will also decrease.
As fewer and fewer people need to understand how a computer ticks in order to use it, the current class of knowledgeable computer users will become a smaller and smaller subgroup of computer users.
This elite class of computer 'brains' will be increasingly in demand for those cases where VB Programming 101 is not sufficient.
This elite class will be paid vast sums to keep the rest of the computer-using world happy (I can dream can't I? :-) )
Cheers,
Toby Haynes
Computers still haven't changed (Score:4, Insightful)
I think there hasn't been a new idea widely used in computing since the '70s! What gives?
Re:Computers still haven't changed (Score:4, Insightful)
I'd say they'd be slowing down... (Score:2)
The only thing left to compete on, when the consumer doesn't need any new features, is cost. Windows apps are getting there. Windows itself isn't there yet, nor is Linux and its apps, but they're getting there, and there's no competing with something that's free (BSD free or GNU free, doesn't matter much to the end user). Look at Win2k (Pro) vs. WinXP Pro. What *good* corporate features are there? Damn close to none, and a whole lot of crap and eyecandy from the home edition that doesn't provide any business value whatsoever.
Kjella
I'd like to see it (Score:2, Informative)
Mosix does a pretty good job of balancing processing time, but it won't migrate tasks that use shared memory or sockets, and it's not fine-grained enough to put threads on different machines. It also requires a similar kernel on all of the machines. But I run it now because it's the closest thing we have. I think it may catch on.
For distributed disk sharing, the closest we could find was Coda, although it has a few disadvantages too. You can't have very large volumes, it's difficult to configure, and it takes painfully earned experience to use efficiently.
Mosix has its MFS, which gives everyone a shot at everyone else's disk drive. This is an interesting possibility too; however, it is not configurable. You can't lay the volumes down where you want them to be. It could be used.
But then, we could partition available disk space into large network RAIDs with network devices. GFS, I believe, works along this principle. It sits at a lower layer than Coda, but without the caching that I think lets the system work efficiently over the network.
I guess the funny thing is that I use and consider them in spite of the challenges. Kind of like Linux in the 1.2.13 days. Ahh, the good ol' days when "Hey, we finally got X working" would bring a round of congratulations from the lab. "Oh no, the mouse doesn't work" would only mean we'd be happy to fumble around for another few hours with faith that it would eventually work, if we changed something somewhere.
Hey, wait a minute. You know, maybe Linux isn't dead like some have said. Maybe there is still a software frontier to cover, and being covered, that we can download/compile and enjoy....
(Although I have yet to get a workable EROS kernel doing anything useful...)
This was being done in 1968 (Score:2, Informative)
I believe it also used an interesting mechanism in which resource requests were allocated through an auction-like process: if one of the boxes needed to spawn a process, it would put out an RFP, and machines willing to undertake the job would offer bids with costs. A second commitment phase then bound the offer to the bid.
All this in the late 1960's.
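The two-phase mechanism described above (collect costed bids for an RFP, then bind the winner) can be sketched roughly like this. All the names here (`Bid`, `run_auction`, `commit`, the cost units) are hypothetical, just illustrating the bid-then-commit shape, not the actual 1960s protocol:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    machine: str
    cost: float  # hypothetical cost units (CPU-seconds, say)

def run_auction(rfp_task: str, bids: list[Bid]) -> Bid:
    """Phase 1: collect bids answering the RFP and pick the cheapest offer."""
    if not bids:
        raise RuntimeError(f"no machine bid on {rfp_task!r}")
    return min(bids, key=lambda b: b.cost)

def commit(winner: Bid) -> str:
    """Phase 2: bind the offer -- the winner is now obliged to run the task."""
    return f"{winner.machine} committed at cost {winner.cost}"

# One node broadcasts an RFP; willing machines answer with costed bids.
bids = [Bid("alpha", 3.2), Bid("beta", 1.7), Bid("gamma", 2.4)]
winner = run_auction("spawn process P", bids)
print(commit(winner))  # "beta" wins, having offered the lowest cost
```

The separate commitment phase matters: a bid alone is just an offer, and a machine could otherwise renege after winning.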
VMS (Score:2, Interesting)
Predictable Predictions (Score:3, Interesting)
My strong belief is that the best "predictions" occur when you find something in use today - only too expensive for the home user - and "predict" it will be ubiquitous within a few years. So here are my completely predictable predictions.
Notice how all of my predictions sort-of exist already. This is what makes predictions so easy.
Hmm. Does BT know about this? SUE! (Score:3, Insightful)
'Nuff said.
Re:Mod Me Down If I'm Wrong..... (Score:3, Informative)
http://www.computerworld.com/computerworld/records/images/story/Farsite.gif [computerworld.com]
Was it just me or does the notion of a "Centralized file server" NOT sound like distributed computing to you?
Not being in possession of any moderator points, I am forced to respond to your comment....
If you were to have read the caption on the image, you would see that it says Logically: a centralized file server, but then it goes on to say Physically distributed among clients.
Re:Mod Me Down If I'm Wrong..... (Score:2)
ok....this is OT and all, but shouldn't it be "When I want your opinion I'll beat it into you"??
If you had read my sig you would know how I feel about your comments.
;)
Just kidding.
I will take your suggestions under advisement.
Re:Mod Me Down If I'm Wrong..... (Score:2)
Why is everyone so hot on distributed computing and storage? Relying on someone else to securely store data is ridiculous because the security model always fails to account for marketers, accountants, and CEOs (or anyone working for them).
Re:Mod Me Down If I'm Wrong..... (Score:4, Informative)
The first statement above makes perfect sense if you consider the second as axiomatic. However, the people working on these types of systems don't accept that axiom. Instead, they believe that cryptography-based security is just as strong as physical security...the odds that someone will factor a couple of hundred-digit numbers (or accomplish some equally difficult mathematical feat) are no higher than that they'll break into your home/office and steal your hardware. If they're right then there should be no problem with storing your files on some Iowa farmhand's computer (so long as you also have other replicas elsewhere for availability purposes), because Iowa Farmboy still can't access or modify your data without the right keys.
That's a big "if" you say. Well, yes it is. But if you want to make an argument that hardware security is the only real security, you'll need to show that cryptographically based systems aren't as secure as skilled and experienced implementors of such systems seem to think. Good luck.
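The encrypt-before-storing idea behind that argument is simple to sketch: the remote host only ever holds ciphertext, so without the key it can neither read nor meaningfully modify the data. This is a toy illustration only, using a SHA-256 counter-mode keystream so it runs with just the standard library; a real system would use a vetted cipher (AES-GCM or the like), not this:

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustration only --
    # do NOT use this construction for real data.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def seal(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying seal() twice round-trips the data.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)            # stays with the owner, never leaves home
blob = seal(key, b"my private file")     # this is all Iowa Farmboy's disk holds
assert seal(key, blob) == b"my private file"  # owner recovers it with the key
```

With replicas of `blob` scattered across several machines for availability, no single host's physical security matters to confidentiality, which is exactly the axiom the parent post rejects.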
Re:And the name of this new OS... (Score:2)
Re:nothing will happen in ten years (Score:2)
That's one trend. The other is that mass produced cheap, networked and mobile computers will be omnipresent. They will all be running some OS (not really relevant which one) and a vm that will make them general purpose. In addition, the network bandwidth will be such that you have easy access to huge amounts of server side storage.
All you need for omnipresent access to all your music is a fat hard drive and a 196 kbps network connection to it. Video requires a bit more, but at 1 mbps the quality is very acceptable. Mobile networks capable of this are already planned and will very likely be deployed worldwide and widely used in 10 years.
The networks are going to happen, the hardware is happening, and the software and most of the concepts needed are already available today. All we need to do is put it together and perfect it a little (remove bugs, improve security, think a little more about privacy).