KXStudio I’m sure is good, though I never use it, and I’m glad there are distributions tailored for specific purposes, since that helps cover what the general distros are lacking: an all-in-one solution where realtime apps work under one roof. The mainstream distros do not ship realtime or low-latency kernel builds by default (a quick way to check for one is sketched below), and the caveat of cross-referencing repos was a point I was making, but I was contemplating it along the lines of KXStudio keeping its builds for its own project. The reality is that if a project like KXStudio started publishing its builds for distro X, it would get a lot of bug reports from users who by default would not be running a realtime kernel (which could be the essence of the issue they are having). That was the point I was trying to make about why it is good to have distros maintained for a specific purpose; it would surprise me if KXStudio started doing this, and the same goes for any other distro tailored for the same purpose.
… all of which, in hindsight of course, is entirely at the risk of the user, who would already know what they’re getting into when mixing repos.
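Since the realtime-kernel point is easy to overlook, here is a minimal sketch (my own illustration, not anything shipped by KXStudio) of how a user could check whether the kernel they’re running is a realtime or low-latency build. It assumes Linux and relies on the convention that RT-patched kernels expose /sys/kernel/realtime and mention PREEMPT_RT in their version string.

```python
import platform

def looks_realtime_or_lowlatency():
    """Heuristic check for a realtime or low-latency kernel build (Linux only)."""
    release = platform.release()   # uname -r, e.g. "6.8.0-40-lowlatency" or "6.6.30-rt30"
    version = platform.version()   # uname -v, may contain "PREEMPT_RT"
    if "lowlatency" in release or "-rt" in release:
        return True
    if "PREEMPT_RT" in version or "PREEMPT RT" in version:
        return True
    try:
        # PREEMPT_RT-patched kernels generally expose this flag file.
        with open("/sys/kernel/realtime") as f:
            return f.read().strip() == "1"
    except OSError:
        return False

if __name__ == "__main__":
    print("realtime/low-latency kernel:", looks_realtime_or_lowlatency())
```

It’s only a heuristic, but it captures the kind of thing an audio-focused distro sets up for you out of the box and a general-purpose distro does not.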
[Edit: I never said KXStudio would severely damage a system; perhaps that came across by interpretation. The worst damage I can see happening is a program or system crash, nothing “severely” damaging, because you are not pulling in experimental system core libraries. It is only when you actually source core libraries such as an experimental/testing libc that you can cause severe damage, and since you’re not sourcing testing or otherwise dangerous libraries like that, you have less to worry about.
– it’s ok, I’m not going to debate anything. I’m not saying you can or cannot do these things; it’s just a general reminder for any potential user out there who may not be familiar with this… simply an exercise in taking note of the risks.
the worst case is a stall, and even that only really happens if the user is mixing testing/experimental repositories that carry unstable system core libraries. It is highly unlikely that any of you are doing this, as you’re all experienced enough to know the difference between the branches of repositories for the popular distributions.
cheers
]
– as for NTFS (more specifically the ntfs-3g project, which is a userland filesystem, meaning it is not part of the Linux kernel project): yes, it is much more stable and safe than exFAT, but the company supporting it (Tuxera) does not bother to optimize it for speed, as I suppose that would compete with their own licensed product lines. I don’t recall how much of ntfs-3g is reverse-engineered, but Tuxera would have the full specs to NTFS by MS’ permission…
The problem with ntfs-3g, though, is that it is slow on Linux. The plus is that it is safer, and that it supports files greater than 4 GB (unlike vfat/fat32). If you’ve got large files to work with and they’re stored on NTFS, you will hit roadblocks in performance, but it is still safer to use than exFAT in its current state.
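To make the 4 GB point concrete, here’s a rough sketch (the function names and messages are my own, assuming Linux and Python) of checking what filesystem a destination sits on via /proc/mounts before copying a large file: vfat/fat32 simply cannot hold a file of 4 GiB or more, while an ntfs-3g mount will take it, just slowly.

```python
import os

def filesystem_type(path):
    """Best-effort lookup of the filesystem type backing `path` by picking
    the longest matching mount point from /proc/mounts (Linux only)."""
    path = os.path.realpath(path)
    best_mount, best_type = "", None
    with open("/proc/mounts") as mounts:
        for line in mounts:
            _dev, mountpoint, fstype, *_ = line.split()
            prefix = mountpoint.rstrip("/") + "/"
            if path == mountpoint or path.startswith(prefix):
                if len(mountpoint) > len(best_mount):
                    best_mount, best_type = mountpoint, fstype
    return best_type

def check_before_copy(src, dst_dir):
    """Warn about the FAT32 4 GiB file-size limit and slow ntfs-3g writes."""
    size = os.path.getsize(src)
    fstype = filesystem_type(dst_dir)
    if fstype in ("vfat", "msdos") and size >= 4 * 1024**3:
        print(f"warning: {src} is {size} bytes; vfat/fat32 cannot store files >= 4 GiB")
    elif fstype in ("fuseblk", "ntfs", "ntfs3"):  # ntfs-3g mounts usually appear as fuseblk
        print("note: destination looks like NTFS; expect slower transfers via ntfs-3g")
```

Nothing fancy, just the kind of sanity check that saves you from a failed copy at the 4 GiB mark on a FAT-formatted drive.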
@Aleph
PCLinuxOS is a good choice; it uses the stable SRPMs from the work of Fedora/Red Hat, and that is a formula for stable packages, which I think is a good thing.