Cost should never be a factor...if it's a hobby to build and make faster computers....SSDs are HDD killers, period...lol
You know what the really expensive ones go for, right? Could buy a few STi's with that kind of money.
And we are talking about electronic devices, not just CPUs. What would you say the service life of an Intel processor is supposed to be? And no, I'm not butthurt, I just don't like being told opinions that conflict with fact. And then there's the fact that you started your pointed response to my post with "not to sound like an ass..." Any time you have to say something like that, it pretty much tells the person you're saying it to that what you're about to say is going to be full of self-satisfaction and know-it-all opinions. And yes, this experience of mine comes from military electronics and medical electronics. We also use Intel processors in our equipment. And sadly, our equipment is usually shut off daily, sometimes (more often than not) more than once per day. And which boards typically fail the most? The Intel processor boards. Thermomechanical fatigue isn't some smoke-and-mirrors thing I'm making up; it is real and proven. Electronics exposed to thermal cycling tend to fatigue because of the differing thermal expansion coefficients of the materials they're made with. That's not just a fact, it's based on physics. Intel doesn't build their processors to crap out after a few years. They build them for longevity, because it isn't just gamers and internet surfers buying their stuff.
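(Back-of-the-envelope version of the CTE argument, with numbers picked purely for illustration rather than measured off our gear: the strain imposed on an interface each cycle is roughly $\epsilon \approx \Delta\alpha \cdot \Delta T$. Silicon sits around 2.6 ppm/°C and copper around 17 ppm/°C, so a 60 °C warm-up/cool-down swing across that interface gives on the order of $(17-2.6)\times 10^{-6} \times 60 \approx 0.09\%$ strain per cycle. Tiny on its own, but repeat it thousands of power cycles and that's exactly the kind of repeated strain that fatigues solder joints and bonds.)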
I'd like to go back and revisit this one too...specifically the "Processors kinda just stop working somewhat mysteriously unless resources are expended to determine the specific cause." I'm going to go back to my previous statement that solid-state devices fail for two main reasons: overheating or internal shifting. The internal shifting is caused by thermal stresses, since that is the main stress applied to a semiconductor. So, if it isn't overheating, then the differing materials in the semiconductor have been stressed out of tolerance...and with heat being the main source of fatigue for electronics, it is safe to say that thermal fatigue is what caused it. Solid-state devices don't just randomly rearrange themselves on their own; they are made to work one way and one way only, and if the internals shift, they don't work the way they were intended to anymore. And I'd think that Intel, more than anyone else, would like to know why a processor they market is failing. What is your experience with CPUs, or electronics in general, and their failure rates?
K, so we kinda missed each other on CPUs vs. electronics more generally. I absolutely agree with you that thermal cycles are death for electronics; I just don't think it's anywhere near as much of a factor for CPUs specifically. In fact, if I had to choose which components were most affected, I would say first the power supply and second the motherboard. Interesting quote. I wonder how much variation there is within a large die when it comes to thermal expansion. Touché. I suppose I kinda did mean to sound a bit like an ass on that particular point. And I was saying that if anyone would do that sort of testing it would be Intel, since, as you said, they would be the ones to care, and they would have the resources as well.
Sorry man, didn't mean to get all nasty on that, I just don't like being called out on electronics. I've seen way too many electronic devices fail due to heat, and since the cooling systems on the equipment I've worked on have either been liquid based or hugely overbuilt, I know that cycling is the other cause. The "not to sound like an ass" thing just kinda rubbed me the wrong way. Hopefully you don't think I'm too big of an ass for my responses.
I don't mind a little interwebz back and forth - I'm not trying to be nasty either. I do have a fair amount of experience in the computer and (though less so) electronics industry (15 yrs). I work more in software now, but you're never far from hardware. It's also been my hobby and passion since I was in 5th grade, so I kinda jump into technical discussions with a bit of fervor.
I think we're all in agreement that heat and stupidity are the #1 killers of PC components. I'm testing an HP storage blade with SSDs for use as an ESX host. So far, with multiple concurrent users, the RAID 5 SAS storage blade had better latency, performance, and load times. However, when I changed the SSD to host only the VMs and put the user diff files on a RAM disk, I did see marked improvements over the SSD alone. Granted, the SSD blade uses something with slightly better performance than a consumer SSD, but there is still a ways to go before I'll be happy with their price/performance.
^ Now that's the kind of SSD use I'm interested in. We run some ESX stuff at work - but it's small virtual web/mail servers on older SCSI RAID arrays and a fibre channel array.
Alright, most interested techies! Looks like tonight's the night that the SSD comes! This thing will have so much power that it would demand that "YOU MUST CONSTRUCT ADDITIONAL PYLONS!" <--- honestly don't know why I said that...damn StarCraft, lol! Anyways, yeah, it comes tonight; I shall be posting the vids from t3h monstar Peecee.
Leaving a computer on all the time is a waste of energy. Computers also emit radiation - a small quantity, but still. I've always powered mine off every night and my hardware failure rate is nowhere above average. As a matter of fact, I've only used a warranty once, and it wasn't for a device failure. readymix, thanks for explaining why multitasking and multithreading require a lot of disk access. Even if you have a ton of RAM, you still have to load everything you need into memory.
Multithreading and multitasking in and of themselves require zero disk accesses. It's what exactly those threads and tasks are doing that may or may not require disk accesses. I can create 1000 threads and send them all spinning, using every last ounce the CPU has to offer, and never once touch the hard drive. And despite my devil's advocating - I'm actually very interested to hear your impressions of that SSD once the PC's running, scoobypwnz.
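To make that concrete, here's a minimal sketch of what I mean (POSIX C, compile with gcc -pthread; the thread count and loop length are arbitrary numbers picked for illustration, not any real workload) - every thread does pure in-memory arithmetic and the program never opens a single file:

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 1000   /* the same "1000 threads" as above, just for illustration */

/* Each thread spins on CPU-only arithmetic; nothing here ever touches the disk. */
static void *spin(void *arg)
{
    (void)arg;
    volatile unsigned long long x = 0;   /* volatile so the loop isn't optimized away */
    for (unsigned long long i = 0; i < 100000000ULL; i++)
        x += i * i;
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];

    for (int i = 0; i < NUM_THREADS; i++) {
        if (pthread_create(&threads[i], NULL, spin, NULL) != 0) {
            fprintf(stderr, "pthread_create failed\n");
            return 1;
        }
    }
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);

    puts("Pegged the CPU with 1000 threads and zero disk accesses.");
    return 0;
}

Disk I/O only shows up once a thread actually reads or writes files, or the OS starts paging because you're out of RAM - which is really the scenario the earlier post was describing.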
Just to beat the dead horse.... As part of our reliability testing, we perform both Temperature Cycling (-55 <-> +150C!) and HTOL (High Temperature Operating Life). As a matter of fact, TC is only a 'package' test and has no effect on the silicon (the chip). In other words, temp cycling is not a test that will find defects in a chip or accelerate them; it will highlight issues with packaging (bonding, differing thermal coefficients, etc.). The fastest way to accelerate the life of a chip is temperature and voltage, period. We run our preamps at 150C die temp at 10% over-voltage for 6 weeks to simulate 10 yrs of normal operating life in a HDD.

However, when we spec lifetimes for HDDs, we have to assume 24/7 operation due to server farms, DVRs, etc. That says that the design (at least for a HDD) is meant to operate 24/7. Most other components you find in a PC are the same, since the manufacturer cannot control the 'on time' of their component.

So, the reality is that for ICs, heat is the killer, and thermal cycling is a killer for mechanical interfaces. In the end, everything has both (you have to connect the IC to something else). I'll leave it at that...

Christian (as a Product Engineer, part of my job is making sure our products meet reliability requirements)
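(For the math-inclined: 10 years is roughly 521 weeks, so squeezing it into 6 weeks needs an acceleration factor of about 521/6 ≈ 87x. The thermal part of that is usually modeled with an Arrhenius-type factor,

$AF = \exp\!\left[\frac{E_a}{k}\left(\frac{1}{T_{\mathrm{use}}}-\frac{1}{T_{\mathrm{stress}}}\right)\right]$

Plugging in purely illustrative numbers - say $E_a \approx 0.6\ \mathrm{eV}$ and a 60 C (333 K) use temperature against the 150 C (423 K) stress temperature, which are not our actual qual parameters - you get roughly $\exp(4.4) \approx 85$x from temperature alone, with the 10% over-voltage adding further acceleration on top.)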
Well, I'm waiting for my friend to get out of class so I can install this thing, lmao. We'll have videos posted around 10 or so!