The original topic was SteamOS.
I haven't been looking over this that much. Now it just looks like a general Linux discussion...
Because SteamOS is a Linux distro, and as such the direction Valve wants to push is Linux into the mainstream -- which is something that has been tried, and has failed, for decades now.
I heard from somewhere (just a rumor!) that Canonical is forking the kernel!
Could they be the ones? Either way Valve will just fork Ubuntu and get the free ride.
SteamOS is almost guaranteed to just be Ubuntu with a custom UI.
I don't think any of the kernel level stuff is going to happen tbh.
Canonical already forked the kernel; the Ubuntu kernel is modified slightly from the stock one.
SteamOS could be based on Ubuntu, like Mint is, or it could be based directly on Debian, like Ubuntu is.
It would likely be assumed that they'd follow Debian's master branch rather than an offset branch of Debian.
It could be Gentoo-based or something for all we know at this point. Debian-based makes the most sense: Valve has worked with Canonical a lot, but there is just no reason for them to build on Ubuntu, so I think that is what will be done. We do get some more info next week on the OS, so maybe we will know then.
Could just be a renamed version of that (Ultimate Edition, http://ultimateedition.info/), which is a modified Ubuntu for "gamers" (mostly it just has WINE preinstalled, plus a bunch of added bloat as well as removed bloat...).
In Ubuntu it has worked flawlessly since at least 2006, when I began using it.
Does 6.10 even have a taskbar? I thought that was when they were brown and sorta tried to emulate Mac OS. (Also, seriously? Ubuntu?)
As far as we can compare Windows and Linux in this regard, Windows' one configuration against Linux' different distributions and desktop environments, anything that is not part of the as-delivered condition doesn't count as a feature of the operating system.
That's an arbitrary and selective limitation for the comparison, isn't it?
Otherwise you'd have to say that any application you can install subsequently is part of the OS, which makes any comparison between them pointless.
We're comparing usability features introduced within the two. In particular, one of the most loudly crowed-about features of Linux is "choice", where you can choose from all sorts of different software. To turn around and say, "you know how Windows has that capability? No, just ignore that, pretend it doesn't exist... now Linux systems are better!" is having it both ways.
Again, it's not about potential ability, but de facto implementation.
I agree. But I think you actually meant to use the term de jure (official) rather than de facto (not official, but common enough to be a standard).
You're making assumptions without evidence. Apart from that, development stages do not count; only actual releases count for saying "this OS had that feature at that time".
"development stages don't count" So, absolutely nothing Linux based counts because it's constantly in a state of development? You are LITERALLY just making up these special limitations specifically to account for the things you already know about Linux and to try to mask it's obvious faults.
You demanded features Linux had sooner than either Windows or MacOS, not necessarily both of them.
I may be a programmer, but when I speak I use English. In English the word "or" is not necessarily exclusive; it's distributive, depending on the context. The fact that you had to grammar-nazi your way out of that one is pretty telling.
Excuse my ambiguous wording. By "global" I meant system packages as well as applications. There are mainly two kinds of Linux distributions: release-based and rolling. The release-based ones don't install new versions but provide bugfixes and security patches. If you also want version updates, you simply take a rolling-release distribution.
There are two problems with the package-manager situation. First, you have the archipelago of repositories. Most distributions simply point to one or a few; rolling releases, where they exist for the distribution (they don't exist for Ubuntu as far as I can tell), simply point to another repository that adds the next version's components, which means it's pretty much the same story as manually adding the new version's repository to your /etc/apt/sources.list file. As long as the distribution stays relatively the same (e.g. just updating versions of each part, GNOME, etc.) rather than completely changing everything, that's fine, but that's the issue: every few versions they seem to completely rework everything again for some reason.
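To make that concrete, here's roughly what I mean (the release names and mirror URLs are just placeholders; the exact lines vary per distribution and per version):

    # /etc/apt/sources.list -- everything points at one release's repositories
    deb http://archive.ubuntu.com/ubuntu/ saucy main restricted universe multiverse
    deb http://archive.ubuntu.com/ubuntu/ saucy-updates main restricted universe multiverse
    deb http://security.ubuntu.com/ubuntu/ saucy-security main restricted universe multiverse

    # "moving to the next version" is essentially swapping the release name
    # and letting the package manager sort out the rest
    sudo sed -i 's/saucy/trusty/g' /etc/apt/sources.list
    sudo apt-get update && sudo apt-get dist-upgrade

That works as long as the new release is more of the same; it's when they rework everything that this falls apart.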
The other problem is autoconf, which claims to help make software "portable", but it can only do that if you put a bunch of macros in the source that you could just as easily put there without autoconf, and it only ports between slightly different flavours. Yay, we can compile this NetBSD program for OpenBSD. That's real portability there, moving to a slightly different flavour. It doesn't always work across Linux distributions because the program often has dependencies that cannot be resolved, and additionally the dependencies are usually considered version-specific for some reason (probably because they often are, thanks to the open-source environment, at the encouragement of the FSF, actively working against any sort of binary compatibility).
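For reference, the "portable" build dance and its usual failure mode look something like this (the library and package names are made up for illustration; which -dev package you're actually missing depends on the distribution):

    # the classic autoconf-generated build
    ./configure --prefix=/usr/local
    make
    sudo make install

    # typical failure: configure can detect a missing or too-old dependency,
    # but it can't resolve it for you, and the package providing it has a
    # different name on every distribution (libfoo-dev, foo-devel, ...)
    #   checking for libfoo >= 1.2... no
    #   configure: error: libfoo 1.2 or newer is required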
That's an image problem of Microsoft, not an intrinsic fault of package managing.
No, it's not really an image problem; the point is that a central repo doesn't work for a company like Microsoft.
Package management is a proven, accepted tool in the Linux world, regularly praised for its usefulness and security (getting your software from only one supervised source).
Again: the one-supervised-source thing doesn't work for Windows, because that source would have to be Microsoft, and that would mean giving Microsoft control over a lot of the software that runs on the system. They actually experimented with this: back in a closed beta of one of the earlier Windows Update-capable OSes, other programs could register themselves and be updated through Windows Update. Very few vendors were willing to register, for pretty obvious reasons, so they scrapped the idea of a 'Windows Update Qualified Vendor' list. Microsoft still uses Windows Update for their own programs, like Office and such.
Also, in regards to "package management", Windows has had one since Windows 98: Windows Installer. It has all the information for installing, repairing, modifying, and uninstalling a product, much like package metadata. There is no central repository, because that would not be in the interest of the commercial software on the system that happens to compete with whoever is running the repository, and I don't think Microsoft would be keen on a so-called "neutral" party hosting the repository, because in their case "neutral" means "openly hostile", much like the W3C. Windows Installer DOES have the capability to check for updates at the vendor's site.
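For reference, the standard msiexec verbs cover the same install/repair/remove lifecycle a package manager does (the .msi name and product code here are obviously just placeholders):

    :: install silently, with a verbose log
    msiexec /i SomeProduct.msi /qn /l*v install.log

    :: repair an existing installation
    msiexec /f SomeProduct.msi

    :: uninstall, by package or by product code
    msiexec /x SomeProduct.msi
    msiexec /x {PRODUCT-CODE-GUID}

What it doesn't give you is the "one place to get everything" part, for the reasons above.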
The main issue with Windows Installer is that it's only used for about half of the applications out there; many applications use other installer frameworks such as InstallShield or NSIS, or even their own home-grown installer. These all work pretty well; the issue is that they don't all work the same way MSI does, and they don't keep all the information about a program in a central location for management tasks. Some of them support repairing and modifying, some don't, and they all implement those operations in their own special way. And the installers that do use MSI usually don't use the more advanced features of the technology, such as advertised features or update checking, instead preferring to have the latter as a feature within the application itself.
Frankly, I can't argue that, lacking the appropriate technical knowledge. I can only assume it's not that big a problem, since Linux servers all over the world are using this system.
At some point, Apache had a critical security vulnerability that basically allowed people to perform SSL and other secure transfers without being authorized. The details aren't super important, but apparently somebody ran the development tool Valgrind on the source code, which said there was an uninitialized variable, so they changed it to initialize that variable to 0. Apparently it was deliberately left uninitialized to add entropy to the TCP sequence generator, so it became 'trivial' for somebody experienced with TCP sequence prediction to piggyback on an existing SSL transfer and even hijack the session, which basically could mean full access to the server itself.
Now, in this context, they of course released an update.
Most servers run Apache for a long time. Thing is, there were reports of breaches after people had installed the update! How the heck did that happen? Because they were in fact still running the previous version: they had updated, but they weren't actually running the new code yet. Thankfully, in this case, finalizing the update only required restarting Apache. Basically, updating a program means restarting that program, which makes sense. The problem is that for a given update you don't really know exactly what was updated. The only way to make sure everything you are currently running is in fact up to date is to reboot after performing the update. Most server installations are configured to do exactly this.
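By "configured to do exactly this" I mean something like the following, run unattended (a minimal sketch for a Debian/Ubuntu-style box; the exact commands differ per distribution):

    #!/bin/sh
    # apply all pending updates, then reboot so every running service
    # is actually executing the code that was just installed
    apt-get update
    apt-get -y dist-upgrade
    reboot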
Windows is actually able to do a bit of updating without rebooting. The problem is that the updates typically apply to components currently being used, specifically things like libraries (DLLs) and the like. The Windows module loader locks a library file on disk because the file is mapped into memory in discardable pages, so the only way to update a given library is to make sure absolutely no programs are using it. So really there are two choices: either avoid a restart by summarily killing any process that is using the module, or simply queue the files as a pending file move at reboot and reboot the system. Usability-wise I'd say the second is better, since it allows you to save your work. Imagine if, instead of requiring a reboot, Windows Update forcibly quit almost all your applications. That would be a bit more of a pain in the ass, and really the end result would still be the same as rebooting. Rebooting is thus the best usability alternative for performing an update.
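Incidentally, that "pending file move at reboot" queue is something you can actually look at if you're curious; it lives in the registry (the value only exists while something is pending):

    :: list file replacements Windows will perform on the next boot
    reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager" /v PendingFileRenameOperations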
Linux sort of halfway does it. It doesn't actually finish the update until you reboot, but it lets you keep using the system. I don't remember if it actually says that a reboot will be needed to finish; I think it just pretends everything is done. Thing is, you are still running the old versions. If you then launch another instance of a program you are already running, you actually run the new version (if it was updated). The only way to actually finish the update with any degree of certainty is a reboot. I'm not really sure what decisions went into not making that reboot automatic, but most server update scripts typically finish off with a reboot for this reason. The problem is that while the system remains usable, it isn't actually updated; it just says it is.
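If you want to see that mix for yourself, a rough way to spot it (assuming a Linux box with /proc; tools like Debian's checkrestart or needrestart do this more politely) is to look for processes that still have since-replaced libraries mapped:

    # processes still mapping deleted (i.e. replaced-by-an-update) files
    grep -l '(deleted)' /proc/[0-9]*/maps 2>/dev/null

    # roughly the same idea with lsof: open files whose link count on disk is zero
    lsof +L1 | grep '\.so'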
Add to this that Linux is known for its uptime, and you have an issue: a server that has been running for a year is running programs that are a year old regardless of what updates have been installed, probably with a bunch of monkeypatched newer bits here and there... maybe Apache was started with version X of a given library, and MySQL was started with version 2.x of that same library, etc. This also has memory considerations, since those versions stay in memory and do not share pages as they would if they actually were the same version. The same issues can crop up on the consumer side. This is a usability problem, but it's also by design, because typically if you are using Linux you have some idea what you are doing. Even now a lot of assumptions are made about the expertise level of the user, and to be honest I'm fine with Linux sticking to what it's best at; the last thing I would want is for Linux to become a shell of its former self. The issue is when those assumptions lead to things like these 'non-updates', where nothing actually says a reboot is needed, so the system just continues to run the old versions.
As for usability, at least there is nothing like the annoying behaviour of Windows, postponing certain updates until a system reboot, or forcing the reboot with a timer of x minutes you can only reset but not stop. So here for me Linux wins the usability match again.
See above: Linux requires a reboot to complete the updates with any amount of certainty, and I've yet to actually see a warning about that. (Maybe it shows one; I don't recall.) In the meantime you are running a funky skunkworks mix: if you launch applications they will bind to the new components, but everything currently running doesn't. (This can get fun if software interfaces with already-running components but the two end up using different versions of some other library they both depend on.)
A common misconception in the Windows world, probably coming from the terrible excuse for a CLI that command.com was for a long time.
The only command interpreter Windows has had that is actually a Windows program is cmd.exe. Command.com is the 16-bit DOS executable, which on 32-bit versions of Windows actually runs under NTVDM. This is not a misconception.
Many tasks are done quicker and easier in the console than with a GUI.
Usability is about discoverability. Those features are not discoverable: you have to read about and understand the CLI. The fact is that typing error-prone and non-intuitive commands is not better than a menu with checkboxes and dropdown lists, and it's only faster once you've mastered the arcane commands, possibly forgotten in the sands of time. As an example, apt-get install <x> requires that you already know <x>; either that or you use apt-cache search <x> and then wonder what the cache actually caches... Unless you know exactly what you are installing, that isn't going to be faster than the UI package manager, and the only reason it's even an option is because the command-line tools are considered an "API" of sorts. Most UI programs on Linux are just a shell around a CLI, whereas on Windows both CLI and GUI programs are shells around a well-established API, which is accessible to both.
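To illustrate the "you have to already know <x>" point (the package here is just an example):

    # guess at a search term, then pick a name out of the wall of results...
    apt-cache search "music player"
    # ...and only then can you actually install anything
    sudo apt-get install audacious

A GUI package manager at least lets you browse by category without knowing the name up front.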
Same goes for quickly checking some system or file information.
That's possible with cmd as well... Or on Windows you can just press Ctrl+Break for the former or Alt+Enter for the latter. That's not usability, it's just faster because you've memorized it.
Batch processing is also a supreme discipline of the CLI. So, following your argument, Microsoft spent money and effort on a "60s" tool by developing Windows PowerShell from 2006 up until today.
Yes. PowerShell is a pile of pointless crap. It doesn't actually do anything particularly useful. First off, it tries to follow Bash, but it sticks some stupid .NET nonsense in there. People who know Bash don't know half of what is going on, people who know .NET don't know half of what is going on, and even those who are fluent in both (myself included) still don't know what the hell is going on, because none of the syntax for dealing with .NET scriptlets makes any sense. Half the time I start thinking I should do something in PowerShell, I just end up writing a one-off C# program and compiling it on the spot for the same purpose. This is the same problem BASH has. Why the hell do people even write shell scripts, given the availability of better options like Perl, Python, and even C?
As for the lost cause, adding an ad hominem fallacy will get you nowhere in a discussion.
It is true. Unix is not a usable system by design, and anybody assuming or arguing that there is any promise of usability in the system doesn't understand that design principle; the only way to make Unix or a Unix-based system "usable" to the general population would mean compromising the very design principles it is built on.
Maybe you don't need this feature if you only tread in the Windows monoculture. In my environment there are users of Windows, MacOS, and Linux, even some BSD folks. Apart from that, I like to choose the best file system for a specific application.
I help run both our company's servers and the servers of our customers. Some of them run Windows and some of them run Linux; the only thing they all have in common is that on most sites they also have to run VMware to run a THEOS system that hosts the older system. Some of the different systems are connected through Samba shares (where Linux is involved as the sharing system), which are accessed on the Windows systems by the in-progress Windows replacement for the THEOS back end; additionally, the Windows system directly accesses the data on the THEOS system through the THEOS network API for file access, in order to reach the data that has not yet been moved to Postgres, which typically runs on another server as well. Whether that one runs Linux or Windows depends on whether the customer has, or is willing to go for, more licenses; sometimes it all gets stuck onto a single machine with enough power to run several of the required components.
I don't exactly "tread in the Windows monoculture" when I work professionally not only with Windows and Linux but with something that I hope for your sake you've never heard of (THEOS), in addition to setting them all up to work with one another. We've yet to ever need any special filesystem support, since it's all networked: while the Linux systems typically use ext4, you don't need ext4 support on Windows to access a share that happens to live on an ext4 partition. If we needed to figure out a problem with a hard drive we'd stick it into another system running a similar OS. We never do, though; they are set up for RAID, so if a drive fails it gets written off (usually somebody grabs it and takes it home at their own risk), and we just stick in another one and rebuild the RAID. (Oh, for informational purposes: the distributions aren't all the same, which I think is stupid; it's like they were just discovering distributions and used the servers as their playground. Oh look, there's Debian... and there's Xandros installed on some of the earlier sites. Now I think we're using CentOS when we need to stand up new Linux servers.) Side note: THEOS is terrible.
All the important drivers are always integrated into the kernel, which requires a recompile to update as far as I'm aware. It's simply not something that's worth taking a context switch out of Ring 0 for. Definitely useful for things like Amazon cloud storage, though I think the shell-extension framework makes more sense for those sorts of things, since they aren't really "filesystems".