ECC or NECC?
Megadeth4168
Member Posts: 2,157
in Off-Topic
Our GIS team has the budget for 2 new computers. They have a budget of about $2,100 per computer.
I don't really know much about the GIS software but I do know that these guys need more powerful machines than anyone else in the building.
I was wondering (given the nature of their work) about choosing the right type of memory for their needs.
In everyday machines I just go with non-ECC... Any thoughts?
Also, the GIS supervisor told me that one of their software packages is licensed per processor... He asked me whether a dual-core processor would count as two processors and need 2 licenses, etc...
I told him that I was sure the processor is still 1 processor with 1 serial number and that it should be fine. However, it's always helpful to hear others' thoughts on things.
Thanks
Comments
-
Judd Member Posts: 132
As a rule of thumb, you would usually only use ECC in high-end servers, as that technology tends to be more expensive, e.g. Rambus memory w/ECC for servers. Just get them the fastest memory that the FSB supports and perhaps max out the memory if it's affordable.
Intel dual-core, AMD dual-core, and Intel HTT processors all count as 1; there's only one ZIF socket. So yes, you are correct that it shouldn't affect licensing.
http://www.microsoft.com/licensing/highlights/multicore.mspx -
Megadeth4168 Member Posts: 2,157
Thanks for the link!
Hopefully the same turns out to be true with the GIS software licensing. -
TheShadow Member Posts: 1,057 ■■■■■■□□□□
I disagree on not giving them ECC. I assume that they are doing mapping, where an error could have disastrous effects. Anything dealing with money, life, or physical harm needs ECC, even in a desktop workstation. The dollars are not great, as even HP's $600 server boxes have it. Once upon a time even the lowest 386 had parity memory, and now it is gone, with a glue factory fire as part of the original reasoning. This vendor BS that was foisted on the unsuspecting public 10 years ago is just a shame.
Microsoft finally realized from testing that many Windows errors were really memory errors and is asking for ECC memory in Vista-certified computers. At the 1 GB level it is estimated that one undetected error occurs each day of computer usage. Whether this error is fatal or not depends on what the system was doing. Most have become complacent about errors because they do not hurt non-critical work.
Just one person's opinion from someone who worked on the designs of some of the first mainframes to use chip memory. I date back to 1103 1K×1 chips.
Who knows what evil lurks in the heart of technology?... The Shadow DO -
JDMurray Admin Posts: 13,088 Admin
TheShadow wrote: Once upon a time even the lowest 386 had parity memory, and now it is gone, with a glue factory fire as part of the original reasoning. This vendor BS that was foisted on the unsuspecting public 10 years ago is just a shame.
The IBM PC and clones have always used 7-bit, non-parity, DRAM memory. It was the Apple Macintosh that used 8-bit parity memory. If you know how parity works, it isn't a very reliable way of detecting errors in critical subsystems like RAM. If a memory parity error was detected, it could not be corrected, and MS-DOS would simply abend with a "Memory parity error detected. System halted" message. In my personal experience, memory parity errors were extremely rare and usually indicated a bad or overheated DRAM chip. -
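A minimal sketch of the point above (my illustration, not from the thread): a single even-parity bit per byte detects an odd number of flipped bits, but it cannot say which bit flipped, and an even number of flips slips through undetected.

```python
# Hedged illustration: even parity per byte, as used in old PC parity memory.

def parity_bit(byte: int) -> int:
    """Even-parity bit recorded at write time: 1 if the byte has an odd number of 1s."""
    return bin(byte).count("1") % 2

stored = 0b10110010
p = parity_bit(stored)              # parity stored alongside the byte

# A single-bit flip (e.g. a failing cell) is detected on read...
corrupted = stored ^ 0b00000100
assert parity_bit(corrupted) != p   # mismatch -> "Memory parity error. System halted."

# ...but the bad bit cannot be located, so nothing can be corrected,
# and a double-bit flip restores the parity and goes unnoticed.
double = stored ^ 0b00000101
assert parity_bit(double) == p
```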
TheShadow Member Posts: 1,057 ■■■■■■□□□□
Sorry JD, it is very rare, but in this case I don't agree with your assessment, as I am looking at a genuine IBM PC-AT with piggyback memory chips and an early serial number. Piggybacks were used because higher-density chips did not exist for early deployment systems. AST 6-pack cards also installed chips 9 at a time, with 1 bit per byte for parity.
Using parity bits for ECC did not become economically feasible until 8 bytes per memory word came into being, because of the Hamming-code distance routines used. Believe me, I am extremely familiar with error-correction routines, having worked on the first one in 1971. I can demonstrate it without notes. My first career was hardware design, both discrete and gate array.
Just what model of IBM PC only had 7-bit bytes? Even today, grab the engineering design manual for any north bridge and you will find 8-bit bytes. After all, DIMMs come in 64- and 72-bit flavors; do the math. Are you sure you are not thinking of data communication, which in fact used only 7 bits plus even, odd, or no parity? CRC became popular when no-parity binary became the norm, pushed by XMODEM and its descendants. I discovered that putting RBBS-PC on the first Osbornes with some JPL buds.
When you have 32 megabytes of RAM, memory errors are statistically small. When you have 1 to 4 gigabytes, alpha-particle bombardment alone makes them statistically significant. Please note that your CPU cache uses ECC, and it is much smaller in size. Microsoft Vista will soon determine the future path. Who knows what evil lurks in the heart of technology?... The Shadow DO -
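For readers curious about the Hamming-code distance routines TheShadow mentions, here is a minimal Hamming(7,4) sketch (my hedged illustration, not his design): check bits sit at positions 1, 2, and 4, and the XOR of the 1-indexed positions holding a 1 (the syndrome) directly names the flipped position. This ability to locate the error is what lets ECC correct, where parity can only detect.

```python
# Hedged sketch of Hamming(7,4): 3 check bits protect 4 data bits and
# allow any single-bit error to be corrected transparently.

def hamming74_encode(nibble):
    """Encode 4 data bits into a 7-bit codeword (positions 1..7)."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4          # covers positions whose index has bit 1 set
    p2 = d1 ^ d3 ^ d4          # covers positions whose index has bit 2 set
    p3 = d2 ^ d3 ^ d4          # covers positions whose index has bit 4 set
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(code):
    """Fix a single flipped bit (if any) and return the 4 data bits."""
    syndrome = 0
    for pos, bit in enumerate(code, start=1):
        if bit:
            syndrome ^= pos    # XOR of positions holding a 1
    if syndrome:               # nonzero syndrome names the flipped position
        code[syndrome - 1] ^= 1
    return [code[2], code[4], code[5], code[6]]

data = [1, 0, 1, 1]
code = hamming74_encode(data)
code[5] ^= 1                            # inject a single-bit error
assert hamming74_correct(code) == data  # recovered without any retry
```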
JDMurray Admin Posts: 13,088 Admin
The IBM PC needed to use an extra DRAM chip for a parity bit. This was optional on clones that I've worked with that allowed memory parity checking to be disabled. It wasn't optional on the PS/2, and I don't remember on the original PC/XT/AT. I'm only talking about DRAM systems here.
And a "byte" is the number of bits in the character set of the operating system. ASCII systems have 7-bit bytes; ANSI/PC-8/EBCDIC systems have an 8-bit byte. The CDC CYBER systems had a 12-bit byte, and Unicode is a 16-bit byte. This has nothing to do with whether or not memory uses parity. IBM got away from this confusion by adopting the term "octet" to specifically refer to an "8-bit byte." -
Megadeth4168 Member Posts: 2,157
I don't have the machines on order yet... So for mapping etc... you think I should go with ECC then? It's easily within our budget.
-
TheShadow Member Posts: 1,057 ■■■■■■□□□□
jdmurray wrote: The IBM PC needed to use an extra DRAM chip for a parity bit. This was optional on clones that I've worked with that allowed memory parity checking to be disabled. It wasn't optional on the PS/2, and I don't remember on the original PC/XT/AT. I'm only talking about DRAM systems here.
And a "byte" is the number of bits in the character set of the operating system. ASCII systems have 7-bit bytes; ANSI/PC-8/EBCDIC systems have an 8-bit byte. The CDC CYBER systems had a 12-bit byte, and Unicode is a 16-bit byte. This has nothing to do with whether or not memory uses parity. IBM got away from this confusion by adopting the term "octet" to specifically refer to an "8-bit byte."
I do not see the point about disabling; every server can disable ECC now. I believe the PS/2 maintenance disk allowed you to disable parity, but I don't have one anymore to check. My point remains that the extra bit was still a 9th bit, not an 8th one.
I see that we are now talking at cross purposes here; the traditional hardware-person-versus-software-person argument. I understand both sides of that argument, having done several MLOCs of commercial C/C++ and who knows how much of half a dozen different assemblies (career number two).
The point still remains that "byte" as used by hardware OEMs settled on 8 bits 20 years ago, and "octet" was settled on to prevent misunderstandings. IBM, Burroughs, and Sperry settled on it long before that, when 8-bit EBCDIC became a standard mainframe code, with CDC being the odd one out. I believe RCA had an oddball bit count also. Time marches on; I gave up rocking-chair arguments long ago. That came when my spouse said I had to quit carving my bits from stone. I can no longer purchase equipment in anything other than 8-bit bytes (or octets, if you prefer), with or without parity/ECC.
I think that with a little googling you will find that IBM, Compaq, HP, Unisys, Dell, and Tandy all had 8 data bits plus 1 parity bit, as did most clones, until after the fire and Intel stopped being the sole supplier of PCI chipsets. You would be hard pressed to find a 30-pin SIMM that did not have 9 chips (1 bit per chip per address cycle) until just before they stopped making them. Look at a 184-pin DDR DIMM of any brand and you will find 8 chips with 8 bits out per address cycle, and 9 chips if it is an ECC module. The spec sheets are available online from most of the major DRAM makers. A final point: pick up your assembly manual and explain AX, BX, CX, and DX broken up into 8-bit chunks, or Debug **** spewing out hex values as two 4-bit nybbles, or Amazing Grace and that filthy word COBOL with packed numbers. Who knows what evil lurks in the heart of technology?... The Shadow DO -
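Doing the math from the post above, hedged as the standard SECDED (single-error-correct, double-error-detect) layout rather than anything vendor-specific: single-error correction over m data bits needs the smallest r with 2^r ≥ m + r + 1, and one more overall parity bit adds double-error detection.

```python
# Quick arithmetic check of the "do the math" point: a 72-bit ECC DIMM
# is a 64-bit data word plus 8 check bits, exactly what SECDED needs.

def sec_check_bits(data_bits: int) -> int:
    """Minimum check bits r for single-error correction: 2**r >= data + r + 1."""
    r = 0
    while 2 ** r < data_bits + r + 1:
        r += 1
    return r

assert sec_check_bits(64) == 7      # SEC alone needs 7 bits over a 64-bit word
assert sec_check_bits(64) + 1 == 8  # +1 overall parity bit gives SECDED
assert 64 + 8 == 72                 # matches the 72-bit ECC DIMM width
assert 9 * 8 == 72                  # nine x8 DRAM chips on an ECC module
```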
TheShadow Member Posts: 1,057 ■■■■■■□□□□
Megadeth4168 wrote: I don't have the machines on order yet... So for mapping etc... You think I should go with ECC then? It's easily within our budget.
Yes!
Today, after trying for the third time to reinstall XP on a system and getting "unable to copy" on the same DLL from the CD to disk, from different original CDs, I swapped the positions of two different 512 MB sticks. The fourth try said a different DLL could not be copied. I replaced both sticks, and the system installed and is on the network grabbing the applications. ECC would have detected the problem the first time. The system was being reinstalled because of various unknown problems, and the user was sure it must have been a virus or a corrupt install. A most timely confirmation for me, considering this thread. Now I have to go kick myself and put back on my button that says "No I will not fix your computer, go call somebody". Who knows what evil lurks in the heart of technology?... The Shadow DO -
Judd Member Posts: 132
In the decade or so that I've dealt with desktop PCs, I can count on one hand how many times I've encountered a genuine memory error. 3 of 5 were a result of ESD.
I suppose that since we're not talking about 2, 3, or 4 GB sticks, it is reasonable and couldn't hurt to put in ECC, if only for peace of mind.
What a great discussion this has been though! -
TheShadow Member Posts: 1,057 ■■■■■■□□□□
Generally, bad memory is damaged before or during installation by ESD. It can take upward of 48 months for the damage to cause system failures. Higher-density memory uses smaller wires, exacerbating the problem and shrinking the window before failures appear. Who knows what evil lurks in the heart of technology?... The Shadow DO