Complexity In Computer Hardware

6 Comments on Complexity In Computer Hardware

I was learning about computer hardware when something hit me: over the years, the push to make computers faster and more effective has led to the creation of faster and faster memories. At first we used the hard disk, from which data was loaded directly to the processor. Then came RAM, to increase the speed at which data reached the processor. Still this was not fast enough, so there came cache memory, faster than RAM and much faster than hard disk access. Even this wasn't enough, so we came up with registers placed inside the processor itself.

The question is: would it be better to remove the middle memories and replace them with registers? We could make huge register files of over 1 GB. Many would argue that this would be expensive, but how many people would prefer a faster computer now?
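The trade-off behind this hierarchy can be made concrete with the standard average-access-time calculation. Here is a minimal sketch, assuming purely illustrative latencies and hit rates (real numbers vary widely by hardware):

```python
# Rough, illustrative latencies in nanoseconds for each level of the
# hierarchy -- these are assumptions, not measurements of any real chip.
CACHE_NS = 1.0
RAM_NS = 100.0
DISK_NS = 100_000.0

def average_access_time(cache_hit_rate, ram_hit_rate):
    """Average time to fetch data, falling down the hierarchy on misses."""
    cache_miss = 1.0 - cache_hit_rate
    ram_miss = 1.0 - ram_hit_rate
    # Every access pays the cache latency; misses additionally pay RAM,
    # and RAM misses additionally pay the disk.
    return CACHE_NS + cache_miss * (RAM_NS + ram_miss * DISK_NS)

# With a 95% cache hit rate and 99.9% of the misses served from RAM,
# the average stays close to cache speed despite the slow disk below it.
print(average_access_time(0.95, 0.999))
```

This is why the "middle memories" earn their keep: as long as hit rates are high, a small fast level in front of a big slow one delivers nearly the fast level's speed at nearly the big level's capacity.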

  1. Samuel Gwokuda March 6, 2014 at 1:48 pm

Interesting hypothesis! I would love a computer with about 16 GB of that super-fast register memory.

  2. Jerin Saji March 6, 2014 at 10:57 pm

I am probably going to write a blog post on this, but I think you hit on the main point of why registers aren't replacing memory. First, I am not sure such a high memory capacity is even possible. Moreover, the research and development costs would be extremely high. Beyond that, the whole memory industry would simply go away if we replaced main memory; the motherboard makers would start building memory on their own. Finally, the configurability that enthusiasts like would no longer be possible.

  3. Aaron Babitzke March 27, 2014 at 8:44 am

I think you need to look at how programs access the registers of the CPU. The best way is to think of assembly language, which accesses them directly. If you keep adding massive numbers of registers, you have to keep updating the programming languages, compilers, and assemblers to use them. You would also have to recompile your OS and software to be able to utilize these registers. From a software-management perspective this would be a nightmare; recompiling that much software takes hours to days, let alone rewriting the compilers and assemblers.

    • Aaron Babitzke March 27, 2014 at 9:14 am

If this were to happen, it would have to be a set standard amount per architecture (x86_64, ARM, etc.). We also have to look at the fact that there is still a ton of software that is 32-bit; even on Windows 7 64-bit, a lot of the command-line utilities are 32-bit apps. We have had 64-bit processors in the industry for over 10 years and still haven't been able to fully migrate to 64-bit software.

      • Ashley Meah June 29, 2014 at 8:26 pm

The reason we still use 64-bit is that using more bits does not make anything faster; it just lets us work with bigger numbers, which does not help enough to make it worth it for now.

A signed 64-bit integer's max value is 9,223,372,036,854,775,807 – if we needed to use a bigger number, we could even use an unsigned integer.

  4. Ashley Meah June 29, 2014 at 8:22 pm

You guys are missing the basic reason registers are kept small: the time it takes to send electrons around. It would not be possible to fit that many semiconductors close enough together.
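The signal-travel argument in the last comment can be put in rough numbers. A back-of-the-envelope sketch, assuming a 3 GHz clock and signals moving at the speed of light (an upper bound real on-chip wires never reach):

```python
# Speed of light in a vacuum, in metres per second -- an absolute upper
# bound; signals in real wires propagate at a fraction of this.
SPEED_OF_LIGHT = 299_792_458

def max_signal_distance_cm(clock_hz):
    """Farthest a signal could possibly travel in one clock cycle, in cm."""
    return SPEED_OF_LIGHT / clock_hz * 100

# At 3 GHz, one cycle allows at most ~10 cm of travel -- and a register
# read needs a round trip, halving the usable radius. Gigabytes of
# register file physically cannot sit within one-cycle reach.
print(max_signal_distance_cm(3e9))
```

Even before cost or software compatibility, physics alone caps how much storage can respond within a single cycle, which is exactly why the fastest memory must also be the smallest.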
