
Operating Systems Still Matter In a Containerized World

Soulskill posted about 2 months ago | from the try-to-contain-yourself dept.

Operating Systems | 129 comments

New submitter Jason Baker writes: With the rise of Docker containers as an alternative for deploying complex server-based applications, one might wonder, does the operating system even matter anymore? Certainly the question gets asked periodically. Gordon Haff makes the argument on Opensource.com that the operating system is still very much alive and kicking, and that a hardened, tuned, reliable operating system is just as important to the success of applications as it was in the pre-container data center.


Docker needs an OS to run, duh! (2, Insightful)

Anonymous Coward | about 2 months ago | (#47709559)

Remember Matthew 7:26: A foolish man built his house on sand.

Re:Docker needs an OS to run, duh! (2)

DivineKnight (3763507) | about 2 months ago | (#47709651)

What does it say about condensed water vapor?

Re:Docker needs an OS to run, duh! (3, Funny)

perpenso (1613749) | about 2 months ago | (#47709693)

What does it say about condensed water vapor?

It varies. Sometimes it says beware. Other times it says that people prefer wine.

Re:Docker needs an OS to run, duh! (0)

Anonymous Coward | about 2 months ago | (#47709715)

What does it say about condensed water vapor?

That it smells a lot like traditional hardware, except for that faint hint of bullshit that keeps wafting by.

Re:Docker needs an OS to run, duh! (0)

Anonymous Coward | about 2 months ago | (#47710781)

I always thought that passage went: A foolish man built his house on sand, then put down a torch and triggered a block update.

Re:Docker needs an OS to run, duh! (1)

stiggle (649614) | about 2 months ago | (#47710875)

I thought it was "A foolish man built his house on sand, while the geek built his house in Minecraft."

Re:Docker needs an OS to run, duh! (2)

jandersen (462034) | about 2 months ago | (#47711265)

Remember Matthew 7:26: A foolish man built his house on sand.

- and what is silicon made from? ;-)

Re:Docker needs an OS to run, duh! (1)

ArcadeMan (2766669) | about 2 months ago | (#47711815)

Listen, lad. I've built this kingdom up from nothing. When I started here, all there was was swamp. All the kings said I was daft to build a castle in a swamp, but I built it all the same, just to show 'em. It sank into the swamp. So, I built a second one. That sank into the swamp. So I built a third one. That burned down, fell over, then sank into the swamp. But the fourth one stayed up. An' that's what you're gonna get, lad -- the strongest castle in these islands.

Re:Docker needs an OS to run, duh! (1)

nycsubway (79012) | about 2 months ago | (#47711963)

I spent several minutes reading "What is it?" on Docker's website, and I still don't understand what it is. Is it like a JVM?

People seem to be forgetting what a server is (1)

Anonymous Coward | about 2 months ago | (#47709561)

I blame the cloud.

Re:People seem to be forgetting what a server is (2, Funny)

Anonymous Coward | about 2 months ago | (#47709613)

Servers are for techie losers. The Cloud is the hip shit, bro.

Re:People seem to be forgetting what a server is (5, Funny)

DivineKnight (3763507) | about 2 months ago | (#47709643)

More along the lines of "they never knew what a server was, and would artfully dodge your phone calls, elevator meetings, and eye contact to avoid accidentally imbibing any knowledge that might furnish them with this understanding; all they know is that the slick salesman with the nice sports car and itemized billing said they'd magically do everything from their end and never bother them, and they believed them."

Re:People seem to be forgetting what a server is (0)

Anonymous Coward | about 2 months ago | (#47710877)

More along the lines of "they never knew what a server was, and would artfully dodge your phone calls, elevator meetings, and eye contact to avoid accidentally imbibing any knowledge that might furnish them with this understanding; all they know is that the slick salesman with the nice sports car and itemized billing said they'd magically do everything from their end and never bother them, and they believed them."

Which book is this quote from? Intriguing!

Re: People seem to be forgetting what a server is (5, Funny)

frikken lazerz (3788987) | about 2 months ago | (#47709745)

The server is the guy who brings me my food at restaurants. I guess people aren't eating at restaurants anymore because the economy is tough.

Re: People seem to be forgetting what a server is (2)

DivineKnight (3763507) | about 2 months ago | (#47709827)

+1, Funny for the BOFH style response.

Re: People seem to be forgetting what a server is (1)

ruir (2709173) | about 2 months ago | (#47710573)

Pity I do not have mod points. Excellent sense of humour sir. Congrats.

Re:People seem to be forgetting what a server is (1)

ILongForDarkness (1134931) | about 2 months ago | (#47710851)

I deal with client-side developers all the time asking for 100MB of data "right now" across an internet pipe (which might be coming from Africa or some place with really bad service): why shouldn't we get all the data at the same time? It seems to me that a lot of performance-tuning knowledge is getting lost on a large percentage of devs: the solution is always to get someone to give you a fatter internet pipe or a bigger server, or to drop everything and try a new framework, etc. Server-side developers do it too: "we have a large instance on AWS, I guess that is as fast as we go". It's very easy to throw your hands up at performance issues, because troubleshooting and experimenting burns so much time compared with hacking together the next feature that the customer wants.

Re:People seem to be forgetting what a server is (1)

Lennie (16154) | about 2 months ago | (#47711799)

One way or the other this is going to solve itself, right?

Either pipes, etc. get a lot bigger (things like silicon photonics in the data center and NVRAM will help), or people with more knowledge of the problem will find a better job.

wow (0)

Anonymous Coward | about 2 months ago | (#47709593)

Truly a remarkable insight! We all thought OSes would die and disappear like the dinosaurs did back in the day.

Re:wow (1)

DivineKnight (3763507) | about 2 months ago | (#47709625)

See it with me: one day, we will have clouds inside clouds!

Re:wow (1)

Anonymous Coward | about 2 months ago | (#47709721)

See it with me: one day, we will have clouds inside clouds!

We run VM managers as VMs today. This is old hat really.

Of Course They Do! (5, Interesting)

Anonymous Coward | about 2 months ago | (#47709607)

Stripped to the bone, an operating system is a set of APIs that abstract the real or virtual hardware to make applications buildable by mere mortals. Some work better than others under various circumstances, so the OS matters no matter where it's running.
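(As a small illustration of that point, a sketch in Python; the file is a throwaway temp file. The application below only ever touches the OS API, and the operating system decides whether the bytes land on a physical disk, a virtual block device, or a container's overlay filesystem.)

    import os
    import tempfile

    # The application only sees the OS file API; where the bytes actually
    # land is entirely the operating system's problem.
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"hello from userspace\n")
        os.close(fd)
        with open(path, "rb") as f:
            print(f.read())
    finally:
        os.remove(path)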

Re:Of Course They Do! (5, Interesting)

DivineKnight (3763507) | about 2 months ago | (#47709629)

I can't wait for programmers, sometime in 2020, to rediscover the performance boost they receive running an OS on 'bare metal'...

Re:Of Course They Do! (1)

Urkki (668283) | about 2 months ago | (#47709847)

Except there will be no performance boost. There may be a blip in some benchmark.

Additionally, programmers are already running *application code* on bare metal when that kind of performance matters, most commonly on GPUs.

Re:Of Course They Do! (-1)

Anonymous Coward | about 2 months ago | (#47709919)

Not true. You lose 20% of the RAM and 30% of the CPU capacity by running any Virtual Machine, even if the virtual machine takes up all the resources. Virtual Machines have to do everything in software, so emulating network cards and disk drives burns a lot of cpu cycles.

Re:Of Course They Do! (1)

putaro (235078) | about 2 months ago | (#47709963)

It's kind of silly but no worse than network file systems. And, containers don't have that virtualization overhead.

Re:Of Course They Do! (5, Informative)

philip.paradis (2580427) | about 2 months ago | (#47711005)

Modern virtualization doesn't have the overhead the GP cited; the 20% RAM loss and 30% CPU capacity loss numbers cited by the AC you responded to are absurd fabrications. I use KVM on Debian hosts to power a large number of VMs running a variety of operating systems, and the loss of CPU bandwidth and throughput with guests is negligible due to hardware virt extensions in modern CPUs (where "modern" in fact means "most 64-bit AMD and Intel CPUs from the last few years, plus a small number of 32-bit CPUs"). Using the "host" CPU setting in guests can also directly expose all host CPU facilities, resulting in virtually no losses in capabilities for mathematically-intensive guest operations. As far as memory is concerned, far from resulting in a 20% loss of available RAM, I gain a significant amount of efficiency in overall memory utilization using KSM [linux-kvm.org] (again, used with KVM). On a host running many similar guests, extremely large gains in memory deduplication may be seen. Running without KSM doesn't result in significant memory consumption overhead either, as KVM itself hardly uses any RAM.

The only significant area of loss seen with modern virtualization is disk IO performance, but this may be largely mitigated through use of correctly tuned guest VM settings and updated VirtIO drivers. The poster you replied to is ignorant at best, and trolling at worst.
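(For anyone who wants to check this on their own host, a minimal Python sketch, assuming a Linux KVM host with KSM enabled; the /sys/kernel/mm/ksm counters below are the standard ones documented by the kernel.)

    import os
    from pathlib import Path

    KSM = Path("/sys/kernel/mm/ksm")          # standard KSM sysfs directory
    page_size = os.sysconf("SC_PAGE_SIZE")    # usually 4096 bytes on x86-64

    def ksm_counter(name):
        return int((KSM / name).read_text())

    shared = ksm_counter("pages_shared")      # de-duplicated pages actually kept
    sharing = ksm_counter("pages_sharing")    # extra references into them, i.e. pages saved

    saved_mib = sharing * page_size / 2**20
    print(f"pages_shared={shared}, roughly {saved_mib:.1f} MiB saved across guests")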

Re:Of Course They Do! (0)

Anonymous Coward | about 2 months ago | (#47711331)

You also have GPU losses. But, hey, your mileage may vary quite a bit on that, because there *ARE* overhead penalties with all of it; your stuff's just not utilizing the hardware as effectively as you're deluding yourself into thinking.

Mods, calling this informative? Lay down the damned crack pipe.

Virtualization is a great thing, don't get me wrong. It is a good way to leverage machine resources on an under-utilized machine. Is it as fast as bare metal? NO. And anybody telling you that it is... they need to lay down the damned crack pipe too.

Re:Of Course They Do! (1)

philip.paradis (2580427) | about 2 months ago | (#47711451)

I like how you didn't bother to directly respond to any of the points listed. Your apparent inability to properly tune a virtualization host and its guests is your problem, and not reflective of the current abilities offered by modern virtualization systems. To address your point regarding GPU losses, if you're really that concerned about such issues on server systems, you're welcome to give host passthrough a shot with one of your guests; a lot of nice work has been done in that area lately. That said, these are indeed server systems we're talking about, and for the majority of use cases the only displays involved are the odd VNC/RDP session. What was your point, again?

Re:Of Course They Do! (1)

ILongForDarkness (1134931) | about 2 months ago | (#47711583)

Admittedly anecdotal, but when working from home a few months back I had issues with the VPN software not liking Windows 8. I ended up running VirtualBox + XP. I noticed the network performance was way slower (things like loading web pages) when using the VM (even when not connected to the VPN) versus the "local" machine. I had GPU optimizations turned on, I had all cores and a lot of RAM allocated to the VM, and I didn't appear to be resource constrained (via Task Manager / perfmon on the host OS). There does seem to be a performance impact as far as I can tell. Could you tune it better? Use a better VM or host OS? I'm sure you can. But it isn't as simple as "modern CPUs/software avoid the penalty," to paraphrase your earlier post. There is work involved, at least to tweak it, that doesn't appear to be required to get equivalent performance on a stock "bare metal" install. At any rate, my experience has been that it is usually good enough, not the best performance you can get. When consolidating a large number of fairly under-utilized boxes onto one server, sure; but a DB or file server that needs to be fast, where your whole org feels the latency? Not so much. Hardware and IT time are relatively cheap compared to hundreds of people each losing more than a minute a day.

Re:Of Course They Do! (0)

Anonymous Coward | about 2 months ago | (#47711865)

He was using WINDOWS. Pfffft. Please, just don't even comment on this stuff. You clearly have no idea what you are talking about.

Re:Of Course They Do! (0)

Anonymous Coward | about 2 months ago | (#47712255)

Lol... You are using your gpu for servers?

Re:Of Course They Do! (1)

Junta (36770) | about 2 months ago | (#47712485)

In my experience, KSM hasn't helped as much as it promised. It depends heavily upon the workloads. It also impacts memory performance. If things are such that KSM can be highly effective, then a container solution would probably be more prudent.

what are you smoking? (4, Interesting)

Chirs (87576) | about 2 months ago | (#47710017)

Anything performance-sensitive isn't going to use emulation but rather paravirtualization or passthrough of physical devices. Current x86 virtualization is getting pretty good, with minimal hit to CPU-intensive code. As for I/O, you can pass through PCI devices into the guest for pretty much native networking performance.

Disk I/O still isn't as good as native, but it's good enough, and most enterprise systems are using iSCSI anyway to allow for efficient live migration.
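(For the passthrough case, a rough Python sketch of how you might list what has been handed to guests; it assumes a Linux host with the vfio-pci driver loaded and uses the usual sysfs location.)

    from pathlib import Path

    # The vfio-pci driver directory appears once the module is loaded.
    VFIO = Path("/sys/bus/pci/drivers/vfio-pci")

    if not VFIO.exists():
        print("vfio-pci not loaded; no devices are set up for passthrough")
    else:
        # Entries that look like PCI addresses (e.g. 0000:01:00.0) are bound devices;
        # the other entries (bind, unbind, new_id, ...) are driver control files.
        devices = [p.name for p in VFIO.iterdir() if p.name.count(":") == 2]
        print("Devices reserved for guest passthrough:", devices or "none")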

Re:what are you smoking? (1)

ruir (2709173) | about 2 months ago | (#47710595)

Good point, sir. I would like to add that I am also paravirtualizing disk access in VMware, though I concede I/O is clearly the bottleneck.

Re:what are you smoking? (1)

Anonymous Coward | about 2 months ago | (#47710671)

iSCSI is absurdly, ungodly slow unless you use accelerators on both the initiator and the target. Far better to use ATAoE, and if you do invest in accelerators, it's far better to go for FCoE or RoCE.

Technically iSCSI is a lousy protocol; the only reason it's got any traction is a bunch of industry vested interests. Don't believe me? Go and read the spec, or try benchmarking ATAoE and iSCSI side by side.

We have 5000 nodes running ATAoE. Not exactly a large site, but not exactly a small site either. When designing the system I tested both iSCSI and ATAoE; iSCSI ate 4x the CPU for the same I/O loads and peaked at 1/8th the IOPS when saturated.

iSCSI: packet-based messaging over TCP over packet-based networking, and it still requires a dedicated network for any semblance of security. What a joke.
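(If you want to reproduce that kind of comparison, here is one possible sketch: a small Python wrapper around fio running a read-only 4 KiB random-read test. The device paths are placeholders, so point it at test LUNs; it also assumes root and fio's libaio engine.)

    import json
    import subprocess

    def rand_read_iops(device):
        """Run a short, read-only 4 KiB random-read test against a block device."""
        out = subprocess.run(
            ["fio", "--name=probe", f"--filename={device}", "--rw=randread",
             "--bs=4k", "--iodepth=32", "--ioengine=libaio", "--direct=1",
             "--runtime=30", "--time_based", "--output-format=json"],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)["jobs"][0]["read"]["iops"]

    # Placeholder paths: an AoE export and an iSCSI LUN on the same backing storage.
    for dev in ("/dev/etherd/e0.0", "/dev/sdX"):
        print(dev, rand_read_iops(dev), "IOPS")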

Re:what are you smoking? (3, Informative)

serviscope_minor (664417) | about 2 months ago | (#47710843)

Yeah but there's the memory penalty, and the conflicting CPU schedulers.

If you have 20VMs basically running the same code, then all of the code segments are going to be the same. So, people are doing memory deduplication. Of course that's inefficient, so I expect people are looking at paravirtualising that too.

That way you'll be able to inform the VM system that you're loading an immutable chunk of code, and if anyone else wants to use it they're free to. So it becomes an object of some sort which is shared.

And thus people will have inefficiently reinvented shared objects, and will probably index them by hash or something.

The same will happen with CPU scheduling too. The guest and host both have ideas who wants CPU when. The guests can already yield. Sooner or later they'll be able to inform the host that they want some CPU too.

And thus was reinvented the concept of a process with threads.

And sooner or later, people will start running apps straight on the VM, because all these things it provides are basically enough to run a program, so why bother with the host OS? Or perhaps they won't.

But either way, people will find that the host OS becomes a bit tied down to a particular host (or not, and thus people reinvent portability layers), and that makes deployment hard, so wouldn't it be nice if we could somehow share just the essentials of the hardware between multiple hosts to fully utilise our machines?

Except that's inefficient, and there's a lot of guesswork, so if we allow the hosts and the host-hosts to share just a liiiiiiiitle bit of information we'll be able to make things much more efficient.

And so it continues.

Re:what are you smoking? (0)

Anonymous Coward | about 2 months ago | (#47711351)

Anything performance-sensitive will be using its own hardware. There are tradeoffs with everything. Virtualization is about leveraging bigger iron for better utilization by creating the illusion of multiple OSes: a thin shim of an OS, for all intents and purposes, running other OSes as applications on top of it. Anything truly performance-sensitive will run on its own hardware, with the tradeoff of taking up more space, electricity, cooling, etc.

So... like I said earlier in a post above this: if you think there's not a cost/tradeoff, and one that isn't something to think about... you need to lay down the damned crack pipe.

Re:what are you smoking? (1)

Builder (103701) | about 2 months ago | (#47712365)

It's worth mentioning that device passthrough requires CPU extensions and motherboard support, and this doesn't seem well supported outside of solid server gear.

Re:what are you smoking? (1)

Junta (36770) | about 2 months ago | (#47712505)

As for I/O, you can pass through PCI devices into the guest for pretty much native networking performance.

Of course, that comes with its own headaches and negates some of the benefits of a VM architecture. Paravirtualized networking is however pretty adequate for most workloads.

It's not like you have to do VM *or* baremetal across the board anyway. Use what makes sense for the circumstance.

Re:Of Course They Do! (4, Informative)

Urkki (668283) | about 2 months ago | (#47710129)

First, the assumption is that we're talking about the kind of virtual machines people run in VirtualBox etc., using the native CPU. IOW, not talking about emulators like QEMU.

VM host RAM overhead is essentially static, while VM guest memory sizes go up along with all memory sizes, so actually RAM overhead asymptotically approaches 0%.

30% CPU? Just how do you get that number? Virtual memory page switches etc. may have some overhead in a VM, I don't know, but normal application code runs on the raw CPU just like code on the host OS.

And there's normally no emulation of hardware; there's just virtualization of hardware in the normal use cases. Hardware can also be directly connected to the VM at the lowest possible level, bypassing most of the host OS driver layers. (Non-performance-related: this is very convenient with mice and keyboards in multi-monitor setups, where each monitor can have a VM in full screen with a dedicated keyboard and mouse in front of it; no more looking at one VM while focus is in another.)
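(A back-of-the-envelope Python illustration of that asymptotic argument; the fixed per-guest overhead figure is just an assumed number for the example.)

    # Assumed fixed per-guest host/hypervisor overhead, in MiB.
    fixed_overhead_mib = 512

    for guest_ram_gib in (2, 8, 32, 128, 512):
        overhead_pct = 100 * fixed_overhead_mib / (guest_ram_gib * 1024)
        print(f"{guest_ram_gib:4d} GiB guest -> {overhead_pct:5.2f}% overhead")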

Re:Of Course They Do! (0)

Anonymous Coward | about 2 months ago | (#47711147)

"VM host RAM overhead is essentially static, while VM guest memory sizes go up along with all memory sizes, so actually RAM overhead asymptotically approaches 0%"

It's this sort of BS that, as an EE, makes me want to slap every god damned CS person on earth. Do you realize how fucking stupid that is? Yes, if you have infinite RAM, it approaches 0. But want to guess what you will never have? And guess what: each instance of the VM uses that same static amount of memory. And the big selling point is one server, multiple OSes, and with each copy you're sucking up memory. If you're running one server with one OS in a virtual machine, you're doing it wrong. At that point why not just run it natively?

And the fact is, there's a performance hit with VMs, and it can be quite bad. I use them personally because they're convenient, but let's not delude ourselves. VMs are markedly slower than native. If you deny this at any level, you've never used a VM next to native. And the CPU hit comes from the fact that VMs prefer to present SCSI devices and often can't use native graphics hardware, so if you're using personal computers with GUIs, all that stuff tends to have to be emulated, and emulation is slow. Now yes, with a server where there is no GUI, you don't have to do that, thus saving you that bit of emulation, but there are still multiple schedulers running, with multiple memory managers and multiple copies of all the OS overhead. There's no way to get away from any of that.

Re:Of Course They Do! (1)

Anonymous Coward | about 2 months ago | (#47710169)

And for VoIP, the unpredictable scheduling you get by running within a VM can have a massive negative impact on audio quality. And it gets even worse if another VM decides it needs more CPU (stolen CPU time).

It's bad for anything that needs low latency or reliable timing.

Re:Of Course They Do! (0)

ruir (2709173) | about 2 months ago | (#47710587)

I guess you have not heard of paravirtualization in Xen, or of vmtools/vmxnet/paravirtualized disks in VMware. I would say with all the current technologies, the CPU loss is more around 10-15% nowadays.

Re:Of Course They Do! (1)

Junta (36770) | about 2 months ago | (#47712475)

CPU throughput impact is nearly undetectable nowadays. Memory *capacity* can suffer (you have overhead of the hypervisor footprint), though memory *performance* can also be pretty much on par with bare metal memory.

On hard disks and networking, things get a bit more complicated. In the most naive setup, what you describe is true: a huge loss for emulated devices. However, paravirtualized network and disk are pretty common, which brings things into the same ballpark as not being in a VM. But that ballpark is relatively large; you still suffer significantly in the I/O department in x86 virtualization, despite a lot of work to make that less the case.

Of course, VM doesn't always make sense. I have seen people make a hypervisor that ran a single VM that pretty much required all the resources of the hypervisor and no other VM could run. It was architected such that live migration was impossible. This sort of stupidity makes no sense, pissing away efficiency for no gains.

Re:Of Course They Do! (1)

gmuslera (3436) | about 2 months ago | (#47711123)

The point of Docker and containers in general is that they run at basically native performance. There is no VM, no virtualized OS; you run under the main OS kernel, but it doesn't let you see the main OS filesystem, network, processes and so on, and doesn't let you do operations that are risky for the stability of the main system. There is some overhead in filesystem access (in the case of Docker, you may be running on AUFS, device mapper, or other back ends that have different kinds of impact on various operations), but it is still a far cry from a VM using a filesystem stored in a file on the main system, accessed through its own filesystem driver.
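(To see which of those storage back ends a given installation is using, something like the following should work; it assumes the docker CLI is installed and the daemon is running, and that the Go-template field for the storage driver is .Driver, as in current docker info output.)

    import subprocess

    # Ask the Docker daemon which storage driver backs container filesystems
    # (aufs, devicemapper, overlay2, btrfs, ...).
    driver = subprocess.run(
        ["docker", "info", "--format", "{{.Driver}}"],
        capture_output=True, text=True, check=True).stdout.strip()
    print("Storage driver:", driver)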

Re:Of Course They Do! (1)

Urkki (668283) | about 2 months ago | (#47709833)

No, stripped to the bone, operating system offers no APIs at all, and it will not run any user applications. It will just tend to itself. Then you add some possibilities for user applications to do things, the less the better, from security and stability point of view. Every public API is a potential vulnerability, a potential window to exploit some bug.

Re:Of Course They Do! (1)

phantomfive (622387) | about 2 months ago | (#47709955)

No, stripped to the bone, operating system offers no APIs at all, and it will not run any user applications.

Uh, what would be the point of such an operating system?

Re:Of Course They Do! (2)

Urkki (668283) | about 2 months ago | (#47710143)

No, stripped to the bone, operating system offers no APIs at all, and it will not run any user applications.

Uh, what would be the point of such an operating system?

Point would be to have a stripped to the bone OS.

Actually it's kind of same as having a stripped to the bone animal (ie. skeleton): you can for example study it, put it on display, give it to the kids to play with... ;)

Re:Of Course They Do! (1)

phantomfive (622387) | about 2 months ago | (#47710165)

How would you even know if it's running?

Re:Of Course They Do! (3, Funny)

perpenso (1613749) | about 2 months ago | (#47710297)

How would you even know if it's running?

The morse code on an LED

Re:Of Course They Do! (1)

Urkki (668283) | about 2 months ago | (#47710413)

How would you even know if it's running?

Well, for the totally barebone version, you could run it in a VM and examine its memory contents there.

I think even a barebone OS would need *some* functionality. It would have to be able to shut itself down, on a PC probably via ACPI events. It would probably need to be able to start the first process/program, because I think an OS has to be able to do that, even if that process then wouldn't be able to do anything due to the lack of APIs. Etc. So even barebone, it still needs to do something.

More practically than examining a VM, and much like physical skeletons on display, most likely there would be some extra support to be able to see what is going on. The equivalent of the wires and rods for a real skeleton would be some kind of debug features for the barebone OS: display messages on screen or over RS-232, possibly accept some commands like reboot or dump information, even provide a machine language debugger/disassembler.

Re:Of Course They Do! (0)

Anonymous Coward | about 2 months ago | (#47711757)

"No, stripped to the bone, operating system offers no APIs at all"
I think we call those kernels, and it's already done in the Linux and BSD world, not Windows.

Re:Of Course They Do! (1)

CastrTroy (595695) | about 2 months ago | (#47711341)

Exactly. And the limitations of an OS can very much determine how an application can perform and what it can do. With Windows tablets, both RT and Pro, any application that can read files can automatically read shared network folders and OneDrive, because that functionality has been properly abstracted away from the application.

Contrast that with Android and iOS, where this functionality isn't abstracted away from the application, and any application that wants to access a network drive or the default cloud drive (Google Drive and iCloud) has to implement the functionality itself. iOS doesn't even present a traditional file system to the application, which drastically changes how programs interact with data.

Android and iOS both (in stock configuration) don't allow multiple applications on the screen at the same time, which limits how applications can behave. On a Windows tablet, the mail client will open links in a second window, leaving the user able to interact with both the page that has been opened and the email client at the same time. You can kind of emulate that with a part of the mail application that loads the web page, but that page doesn't get saved in your browser history, because it's not the actual browser the user would normally use.

Re:Of Course They Do! (0)

Anonymous Coward | about 2 months ago | (#47712057)

Most modern operating systems aren't just an API. They also handle task scheduling and a few other housekeeping duties.

Operating systems that are "just an API" are considered archaic now. Examples: DOS and "classic" Mac OS. (Technically, Windows in its pre-95 incarnations was a shell that ran on DOS, not an operating system, otherwise it would get mentioned here as well.)

Advert? (5, Insightful)

Anonymous Coward | about 2 months ago | (#47709623)

Is this just an advert for Docker?

Re:Advert? (4, Interesting)

ShanghaiBill (739463) | about 2 months ago | (#47709819)

Is this just an advert for Docker?

Yes. They refer to the "rise" of Docker, yet I had never heard of it before. Furthermore, Docker doesn't even fit with the main point of TFA that "the OS doesn't matter". Here is a complete, exhaustive list of all the OSes that Docker can run on:

1. Linux

Re:Advert? (1)

jbolden (176878) | about 2 months ago | (#47709887)

Docker is legit and important. There are a half-dozen of these containerized OSes. Docker is the most flexible (it runs on a wide range of Linuxes, while most of the others are specific to a particular cloud vendor). It is also the most comprehensive, though SoftLayer's and Azure's might pass it in that regard. A Docker container is thicker than a VM but thinner than a full Linux distribution running on a VM. It is more accurate to consider Docker an alternative to VMs with Linux distributions running in each VM.

The Docker platform doesn't actually interface with hardware; it relies on Linux to do that, the same way that Google Play makes use of Android to address the actual physical hardware. I think it is accurate to say that Docker is a flavor of Linux.

Re:Advert? (2)

invalid-access (1478529) | about 2 months ago | (#47709939)

In what sense is a Docker container thicker than a VM? I always thought it was thinner/lighter; e.g., a host can allocate varying amounts of memory to a container (with optional limits), whereas running a VM will always set you back that much memory on its host.

Re:Advert? (1)

jbolden (176878) | about 2 months ago | (#47710011)

The Docker Engine is much thicker than a hypervisor, essentially containing the full suite of services of a guest OS.

Re:Advert? (1)

Anonymous Coward | about 2 months ago | (#47710623)

I don't understand what you mean.

Docker is nothing more than a configuration interface for Linux Containers (a feature of the Linux kernel). The engine is not a hypervisor. A "dockerized" VM could be seen as a chrooted directory (or mountpoint) with its own PIDs, FDs, sub-mountpoints, network interfaces, etc.
It shares the kernel of the "real machine". It's also based on the "real kernel" services for everything.

I doubt there could be anything lighter.

It just has its own init, so everything inside the VM is (nearly) completely isolated from other VMs.
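(You can poke at the same kernel features without Docker at all; a quick Python sketch, assuming root and the util-linux unshare tool that most distributions ship.)

    import subprocess

    # Start a shell in fresh PID and mount namespaces. Inside it, ps sees only
    # the processes of this little "container": PID 1 is the shell itself.
    subprocess.run(
        ["sudo", "unshare", "--fork", "--pid", "--mount-proc",
         "sh", "-c", "echo inside: && ps -e"],
        check=True)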

Re:Advert? (0)

Anonymous Coward | about 2 months ago | (#47710735)

I don't understand what you mean.

Docker is nothing more than a configuration interface for Linux Containers (a feature of the Linux kernel). The engine is not a hypervisor. A "dockerized" VM could be seen as a chrooted directory (or mountpoint) with its own PIDs, FDs, sub-mountpoints, network interfaces, etc.
It shares the kernel of the "real machine". It's also based on the "real kernel" services for everything.

I doubt there could be anything lighter.

It just has its own init, so everything inside the VM is (nearly) completely isolated from other VMs.

If we're really going to boil this down, I believe what the parent was referring to was possibly that Docker will consume more space on disk than something like VMware, which is pretty damn tiny.

And since it sounds like it's more of a hybrid system (requiring Linux and its container features instead of standing alone), you should probably include the base Linux OS install size plus Docker in order to get an accurate idea of the total footprint required to run it.

I highly doubt that configuration is as small as other offerings.

Re:Advert? (0)

Anonymous Coward | about 2 months ago | (#47711823)

Seriously? You're going to use disk space as a measure?

Who cares about disk space anymore?

The stuff that matters (CPU, memory, network isolation) is what it solves, and with no overhead.

Who gives a shit about how much disk space is consumed at this point?

Re:Advert? (1)

jbolden (176878) | about 2 months ago | (#47711323)

Containers play the role of VMs. They are competing paradigms for how to deploy services. The Docker engine is responsible for allocating resources to, starting, and terminating containers, which is what a hypervisor does.

Re:Advert? (1)

visualight (468005) | about 2 months ago | (#47711597)

Docker has an engine? I haven't actually used Docker yet because I've already been using LXC for some years and just haven't had a "free day" to play with it. But I've always been under the impression that Docker was just an abstraction around LXC, making containers easier to create. Is the Docker "engine" actually LXC?

Serious question, because the main reason I haven't invested in Docker is my perception that it won't really save (me) time if you already understand LXC well. Is there some other benefit besides "it's easier"?

Re:Advert? (2)

jbolden (176878) | about 2 months ago | (#47711703)

No, the engine uses LXC as a component. There is a lot more to Docker than just LXC. But this comes up a lot, so it is in the FAQ and I'll just quote the FAQ: Docker is not a replacement for LXC. "LXC" refers to capabilities of the Linux kernel (specifically namespaces and control groups) which allow sandboxing processes from one another and controlling their resource allocations. On top of this low-level foundation of kernel features, Docker offers a high-level tool with several powerful functionalities:

Portable deployment across machines. Docker defines a format for bundling an application and all its dependencies into a single object which can be transferred to any Docker-enabled machine, and executed there with the guarantee that the execution environment exposed to the application will be the same. LXC implements process sandboxing, which is an important pre-requisite for portable deployment, but that alone is not enough for portable deployment. If you sent me a copy of your application installed in a custom LXC configuration, it would almost certainly not run on my machine the way it does on yours, because it is tied to your machine's specific configuration: networking, storage, logging, distro, etc. Docker defines an abstraction for these machine-specific settings, so that the exact same Docker container can run - unchanged - on many different machines, with many different configurations.

Application-centric. Docker is optimized for the deployment of applications, as opposed to machines. This is reflected in its API, user interface, design philosophy and documentation. By contrast, the lxc helper scripts focus on containers as lightweight machines - basically servers that boot faster and need less RAM. We think there's more to containers than just that.

Automatic build. Docker includes a tool for developers to automatically assemble a container from their source code, with full control over application dependencies, build tools, packaging etc. They are free to use make, maven, chef, puppet, salt, Debian packages, RPMs, source tarballs, or any combination of the above, regardless of the configuration of the machines.

Versioning. Docker includes git-like capabilities for tracking successive versions of a container, inspecting the diff between versions, committing new versions, rolling back etc. The history also includes how a container was assembled and by whom, so you get full traceability from the production server all the way back to the upstream developer. Docker also implements incremental uploads and downloads, similar to git pull, so new versions of a container can be transferred by only sending diffs.

Component re-use. Any container can be used as a "base image" to create more specialized components. This can be done manually or as part of an automated build. For example you can prepare the ideal Python environment, and use it as a base for 10 different applications. Your ideal Postgresql setup can be re-used for all your future projects. And so on.

Sharing. Docker has access to a public registry where thousands of people have uploaded useful containers: anything from Redis, CouchDB, Postgres to IRC bouncers to Rails app servers to Hadoop to base images for various Linux distros. The registry also includes an official "standard library" of useful containers maintained by the Docker team. The registry itself is open-source, so anyone can deploy their own registry to store and transfer private containers, for internal server deployments for example.

Tool ecosystem. Docker defines an API for automating and customizing the creation and deployment of containers. There are a huge number of tools integrating with Docker to extend its capabilities. PaaS-like deployment (Dokku, Deis, Flynn), multi-node orchestration (Maestro, Salt, Mesos, Openstack Nova), management dashboards (docker-ui, Openstack Horizon, Shipyard), configuration management (Chef, Puppet), continuous integration (Jenkins, Strider, Travis), etc. Docker is rapidly establishing itself as the standard for container-based tooling.
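(In day-to-day use that portable-deployment story boils down to something like the following; the image tag and registry are placeholders, and it assumes a Dockerfile in the current directory plus a working docker CLI.)

    import subprocess

    def sh(*args):
        """Run a command and fail loudly if it returns non-zero."""
        subprocess.run(args, check=True)

    # Build an image from the Dockerfile in the current directory, tag it,
    # push it to a registry, and run it unchanged on any Docker-enabled host.
    sh("docker", "build", "-t", "example/myapp:1.0", ".")
    sh("docker", "push", "example/myapp:1.0")   # assumes you're logged in to a registry
    sh("docker", "run", "--rm", "example/myapp:1.0")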

Re:Advert? (1)

Anonymous Coward | about 2 months ago | (#47712445)

You are mistaken with regard to the relationship Docker has (technically, had) with LXC. When Docker was originally created, it basically sat on top of LXC and used its capabilities for containers. Nowadays, it uses libcontainer underneath its abstractions and doesn't use LXC at all.

Re:Advert? (0)

Anonymous Coward | about 2 months ago | (#47711371)

So, it's kind of like UML, then, right?

Re:Advert? (0)

Nimey (114278) | about 2 months ago | (#47711773)

So because you personally have never heard of Docker before, this story must be a slashvertisement?

That's some interesting logic.

Re:Advert? (1)

ArcadeMan (2766669) | about 2 months ago | (#47711847)

So because you personally knew about Docker before, this means everybody should know about it too?

That's some interesting logic.

Re:Advert? (0)

Nimey (114278) | about 2 months ago | (#47712205)

https://yourlogicalfallacyis.c... [yourlogicalfallacyis.com]

Here's a nickel, kid. Buy yourself a better argument.

Re:Advert? (1)

ArcadeMan (2766669) | about 2 months ago | (#47712239)

It was the same argument as your own.

Re:Advert? (0)

Nimey (114278) | about 2 months ago | (#47712273)

I suppose it might seem that way to an idiot.

Re:Advert? (0)

Anonymous Coward | about 2 months ago | (#47712593)

Is this just an advert for Docker?

Yes. They refer to the "rise" of Docker, yet I had never heard of it before. Furthermore, Docker doesn't even fit with the main point of TFA that "the OS doesn't matter". Here is a complete, exhaustive list of all the OSes that Docker can run on:

1. Linux

They mean the OS as in (one of) the layers between your application and the hardware. That layer is getting less important.

Vote For PEDRO (0)

Anonymous Coward | about 2 months ago | (#47709641)

All of your WILDEST dreams will come true!

Everything new is old (5, Insightful)

starfishsystems (834319) | about 2 months ago | (#47709657)

"The operating system is therefore not being configured, tuned, integrated, and ultimately married to a single application as was the historic norm, but it's no less important for that change."

What? I had to read this a couple of times. The historic norm was for a single operating system to serve multiple applications. Only with the advent of distributed computing did it become feasible, and only with commodity hardware did it become cost-effective, to dedicate a system instance to a single application. Specialized systems for special purposes came into use first, but the phenomenon didn't really begin to take off in a general way until around 1995.

Re:Everything new is old (1)

Nyder (754090) | about 2 months ago | (#47709695)

"The operating system is therefore not being configured, tuned, integrated, and ultimately married to a single application as was the historic norm, but it's no less important for that change."

What? I had to read this a couple of times. The historic norm was for a single operating system to serve multiple applications. Only with the advent of distributed computing did it become feasible, and only with commodity hardware did it become cost-effective, to dedicate a system instance to a single application. Specialized systems for special purposes came into use first, but the phenomenon didn't really begin to take off in a general way until around 1995.

Going to point out that in the DOS days, you'd have different memory setups for different stuff. Plenty of apps (mostly games, though) required a reboot to get the correct memory manager set up. Granted, this was a 640k barrier problem, and the main OS didn't actually load anything different, just the memory managers and possibly some third-party programs.

Even back in the C64 days you'd have to reboot the computer after playing a game, since you didn't normally have a way to exit it. Granted, that was because floppies were how things were done.

But I get what you are saying; just wanted to point out some history in case we've got some of them young kids reading this stuff who think computers started in the 2000s after Gore invented the Internet.

Re:Everything new is old (2)

perpenso (1613749) | about 2 months ago | (#47709725)

"Personal Computers" not "computers". We also had mainframe, minicomputers and workstations that were pretty good at running multiple programs in parallel.

Re:Everything new is old (1)

Joe_Dragon (2206452) | about 2 months ago | (#47709849)

OS/2 was really good at running older DOS apps and games.

Re:Everything new is old (1)

putaro (235078) | about 2 months ago | (#47709969)

Despite the name, DOS was not an operating system

Re:Everything new is old (0)

Anonymous Coward | about 2 months ago | (#47711177)

Says who? You?

You seem to forget, ask 10 different system architects what an OS is and what its responsibilities are and you'll probably get 100 different answers. And yes, the order of magnitude difference there is intentional.

Re:Everything new is old (1)

Darinbob (1142669) | about 2 months ago | (#47710115)

DOS was just a niche, not even a real OS, and it was only around for a small fraction of the time operating systems have existed. Unless you mean mainframe DOS instead of the PC stuff. Even by the standards of the time it wasn't an OS.

Re:Everything new is old (1)

starfishsystems (834319) | about 2 months ago | (#47710285)

To clarify a bit, I was referring to the period between 1960 and today, when multiprocessing systems established what could properly be called the "historic norm" for the industry. That's the lineage, starting with mainframes, which led directly to virtualization. In fact we were working on primitive virtualization and hypervisors even then, though for the sake of faster system restarts, live failovers and upgrades rather than anything like the cloud services of today. I hadn't thought to include hobbyist systems in this account because they're not really part of this lineage. It was a long time before they became powerful enough to borrow from it. What they did contribute was an explosion in commodity hardware, so that when networking became ubiquitous it became economical to dedicate systems to a single application. But that comes quite late in the story.

Re:Everything new is old (0)

Anonymous Coward | about 2 months ago | (#47710577)

ICL mainframe computers and VMS.
You built and compiled the virtual OS to have ONLY what was needed, back when RAM might have been 4 or so megabytes; that way nothing was wasted. Today's OSes are littered with code that rarely gets used: junk, and as junk, a security issue.
The TRS-80, Apple II, DOS, or BusyBox show how to be frugal.

What is old is also new (1)

Anonymous Coward | about 2 months ago | (#47709687)

FreeBSD and Solaris et al. have been doing OS-level virtualization for years; let's ask them about host security and tuning and build on their experience.
FreeBSD and illumos are also both open, with far more experience in this area.
Singling out Linux as the operating system of choice makes him look like a tool.

Hardened Operating Systems (1)

ka9dgx (72702) | about 2 months ago | (#47709697)

Instead of trying to harden an OS, why not use a system designed to be secure from the start, one that supports multilevel security [wikipedia.org]? The technology was created in response to data processing demands during the Vietnam conflict, and perfected during the 70s and 80s.

Re:Hardened Operating Systems (1)

easyTree (1042254) | about 2 months ago | (#47709933)

Windows 8 ? :D

Re:Hardened Operating Systems (1)

rjr3 (658693) | about 2 months ago | (#47710941)

Both are equally important.

So marketing for Docker (0, Interesting)

Anonymous Coward | about 2 months ago | (#47709731)

Yet another containerized OS independent thingy....

Look, these OS wrappers all fail because ultimately they deliver a subset of an OS, and are one generation behind. The OSes add features, and the container adds them only once *all* of the underlying OSes it supports have adopted similar features, enabling the wrapper to add them.

So they're always behind, and always a subset of functionality.

And nobody uses them, well, except for a few niche apps, because being behind your competitors isn't a viable option.

Marketing Docker this way won't help it. It faces this big problem, and it's not a new problem, and they're not the first to try this.

So marketing for Docker (0)

Anonymous Coward | about 2 months ago | (#47709895)

Yet another containerized OS independent thingy....

How is Docker OS independent? Looks like it only runs on Linux.

Who's wondering this? (1)

rebelwarlock (1319465) | about 2 months ago | (#47709795)

Was anyone really wondering if operating systems no longer mattered? Might as well have gone with "Nothing is different" as your headline.

Re:Who's wondering this? (2)

timeOday (582209) | about 2 months ago | (#47709869)

I question it. When you're running a database implemented in Java on a filesystem in an OS inside a VM on a filesystem inside another OS on virtual memory/paging hardware, that's 8 levels of largely redundant access control / containerization / indirection. It's a supreme mess; it imposes a big runtime cost and, more importantly, the burden of configuring all those layers of access control.

Re:Who's wondering this? (1)

putaro (235078) | about 2 months ago | (#47709971)

Some people like nested virtual machines, some people like candy colored buttons. What else are you going to do with all those resources? :-)

How about getting it to work on an OS first? (1)

Anonymous Coward | about 2 months ago | (#47709975)

Dear Docker, can you make it work on my Windows machine? Your scripts don't work.

Nomenclature Abuse (-1)

Anonymous Coward | about 2 months ago | (#47710135)

"servers in containerized world" suggest containerized servers, which are servers that are shipped in a shipping container and designed to be used while permanently still in the container.

The article is apparently about something else. The summary makes none of this clear.

Misleading (0)

Anonymous Coward | about 2 months ago | (#47710479)

And I came here expecting discussion about actual docks and containers. Left disappointed.

Re:Misleading (1)

ChunderDownunder (709234) | about 2 months ago | (#47710527)

I was expecting a discussion about Fremantle's win over Hawthorn on the weekend. :)

Of course (1)

Anonymous Coward | about 2 months ago | (#47711781)

Even if you store your data in "the cloud", that data is stored on a server someplace, and it has to have an operating system.

A horrible nightmare... (2)

Junta (36770) | about 2 months ago | (#47712421)

So to the extent this conversation does make sense (it is pretty nonsensical in a lot of areas), it refers to a phenomenon I find annoying as hell: application vendors bundle all their OS bits.

Before, if you wanted to run vendor X's software stack, you might have to mate it with a supported OS, but at least vendor X was *only* responsible for the code they produced. Now, increasingly, vendor X *only* releases an 'appliance' and is in practice responsible for the full OS stack, despite having no competency to be in that position. Let's look at the anatomy of a recent critical update: OpenSSL.

For the systems where the OS has applications installed on top, patches were ready to deploy pretty much immediately, within days of the problem. It was a relatively no-muss affair. Certificate regeneration was an unfortunate hoop to jump through, but it's about as painless as it could have been given the circumstances.

For the 'appliances', some *still* do not even have an update for *Heartbleed* (and many more didn't bother with the other OpenSSL updates). Some have updates, but only in versions that also include functional changes to the application that are not desired, and the vendor refuses to backport the relatively simple library change. In many cases, applying an 'update' actually resembles a reinstall: having to download a full copy of the new image and do some 'migration' work to keep data continuity.

Vendors have traded generally low amounts of effort in initial deployment for unmaintainable messes with respect to updates.
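(On a conventional host where the distribution owns the library, checking and fixing that kind of thing is scriptable; a sketch for a Debian-style system, with the package names purely illustrative.)

    import subprocess

    def installed_version(package):
        """Return the installed version string of a Debian package."""
        return subprocess.run(
            ["dpkg-query", "-W", "-f=${Version}", package],
            capture_output=True, text=True, check=True).stdout.strip()

    # Illustrative check: what openssl does the distribution currently have installed?
    print("openssl:", installed_version("openssl"))

    # Remediation on a conventional host is just the package manager, e.g.
    #   apt-get update && apt-get install --only-upgrade openssl
    # whereas an opaque appliance typically needs a full image swap.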
