CPU Architecture

Apr 13, 2021

Intel CPU Architecture

Moving Away from CISC & RISC

Apple is pushing RISC architecture to performance levels no one thought RISC capable of before the iPhone; it now competes with desktop-class systems. Nvidia is reportedly entering the CPU market as well. But Intel cannot go RISC, as it does not own that platform, and CISC does not offer what mobile users need. So where do we go?

I posit that Intel take a page out of Apple’s playbook and bring forth a new system built from the ground up, transitioning away from CISC and RISC alike. An instruction set that AMD will not be licensed to use. One capable of meeting mobile, desktop, and server requirements: general purpose, productivity, gaming, scientific, and more.

Modular Instruction Set Computer (MISC)

A modular design consisting of a bottom wafer that houses the I/O, cache, and power management. On top of the I/O wafer sits the core wafer. The core wafer holds ‘core packages’ that include the processing core(s) and their instruction set: core packages for productivity/general use, gaming, scientific/data, VR/AR, AI, and environmental sensing (gyro, altimeter, audio, visual, etc.)

This approach gives Intel the following advantages.

  1. Each core package can be highly tuned for a specific workload, keeping the instruction set simple and efficient for that workload
  2. Engineers can work on core packages independently of the overall system: a team dedicated to scientific core design, one for AI, and so forth. This gives them the freedom to design for a very specific purpose, getting the most out of performance and efficiency
  3. Not only will you have SKUs for specific workloads, but you will also be able to build custom chips for enterprise clients by combining any configuration of core packages
  4. Keeping with the i5, i7, i9 lineup, you can have configurations such as i5VR tuned as a lower-end VR kit and i9VR at the high end: ixVR, ixAI, ixSI (scientific), ixGA (gaming), ixPR (productivity/general purpose)
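To make the SKU idea concrete, here is a minimal sketch of the “parts bin” assembly model. Everything here is hypothetical (the `CorePackage` type, the catalog, the core counts, and `build_sku` are illustrative names, not real Intel parts or products); it only shows how SKU names like i9VR fall out of composing core packages onto a shared base.

```python
# Hypothetical sketch: SKUs assembled by combining core packages.
# All names and core counts are illustrative, not real products.
from dataclasses import dataclass

@dataclass(frozen=True)
class CorePackage:
    code: str   # suffix used in the SKU name (e.g. "VR")
    name: str
    cores: int  # assumed core count for this package

CATALOG = {
    "PR": CorePackage("PR", "productivity/general", 8),
    "GA": CorePackage("GA", "gaming", 12),
    "SI": CorePackage("SI", "scientific/data", 16),
    "VR": CorePackage("VR", "VR/AR", 8),
    "AI": CorePackage("AI", "neural mesh", 4),
}

def build_sku(tier: str, *package_codes: str) -> str:
    """Compose a SKU name (e.g. i9VR) from a tier plus chosen core packages."""
    return tier + "".join(CATALOG[c].code for c in package_codes)

print(build_sku("i9", "VR"))        # i9VR: high-end VR kit
print(build_sku("i5", "PR", "AI"))  # i5PRAI: a custom enterprise mix
```

The same catalog lookup would serve the enterprise case in point 3: any combination of packages yields a buildable configuration.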

Core Packages

Instead of general pipelines that have to do everything, you have an array of configurations. You might have an adder core package whose gates run around in a square for continuous counting, because all they do is count. In the center (attached to the bottom I/O wafer) is the controller that puts computations onto the adder wheel (the core) and pulls them off up to four times per cycle. A short add request may only go around once; longer adds stay on the wheel as long as they need. The controller feeds the adder wheel with multiple operations, and you can have multiple adder wheels as the workload demands. The controller puts each answer back onto the I/O wafer bus and pulls in the next request. Super-tuned just for adding, like the old math co-processors back in the day.
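The adder-wheel behavior described above can be sketched as a toy simulation. This is purely illustrative (the 8-slot wheel size, the request format, and `run_adder_wheel` are assumptions, not a real design): the controller loads requests onto the wheel, each circulates for the revolutions it needs, and up to four finished results are retired per cycle back to the bus.

```python
# Toy simulation of the adder-wheel idea; all parameters hypothetical.
from collections import deque

RETIRE_PER_CYCLE = 4  # controller pulls at most 4 answers off per cycle
WHEEL_SLOTS = 8       # assumed wheel capacity

def run_adder_wheel(requests):
    """requests: list of (a, b, revolutions_needed).
    Returns (results in completion order, cycles the wheel ran)."""
    pending = deque(requests)
    wheel = []  # each slot: [partial_sum, revolutions_remaining]
    results, cycles = [], 0
    while pending or wheel:
        cycles += 1
        # Controller feeds new work onto the wheel as slots free up.
        while pending and len(wheel) < WHEEL_SLOTS:
            a, b, revs = pending.popleft()
            wheel.append([a + b, revs])
        # One revolution: every request on the wheel advances.
        for slot in wheel:
            slot[1] -= 1
        # Retire at most RETIRE_PER_CYCLE finished results this cycle.
        for slot in [s for s in wheel if s[1] <= 0][:RETIRE_PER_CYCLE]:
            wheel.remove(slot)
            results.append(slot[0])
    return results, cycles

# Two short adds complete in one revolution; the longer one stays on the wheel.
results, cycles = run_adder_wheel([(1, 2, 1), (10, 20, 3), (5, 5, 1)])
print(results, cycles)  # [3, 10, 30] 3
```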

The AI core package would be designed as a neural-network mesh with an instruction set just for AI. Don’t need AI? Then you save the space of the core and its instructions. You might have a simple AI core with a simple instruction set, or a robust AI core with a robust instruction set, depending on the application. Again, space is taken where needed and saved where it is not.

Scientific and data CPUs could be configured with more cache for larger calculations. The possibilities are endless, and manufacturing is straightforward: take the needed cores out of the parts bin and assemble any configuration.


  1. Simplification of instruction sets. You have more of them, but each is simpler and quicker to execute.
  2. Cores are highly tuned for specific workloads. You no longer have a pipeline that must be one-size-fits-all.
  3. Reduced power and heat, following from #2.
  4. No more separate performance cores and efficiency cores. They are now one and the same: high performance and efficient.


  1. Core packages allow dedicated engineering teams to design for specific needs and problems, including the I/O wafer. It will be simpler to tune a core package for better AI performance than to tune general cores for AI without breaking anything else. Releases can focus on the core packages updated in that cycle.
  2. As an (almost) bolt-on system, you bin the chips, then assemble them onto the I/O wafer according to production demand. In downtime, you might run some scientific cores and bin them for later use. You don’t need the entire fab committed to a single use case as demand changes; just add parts onto the core wafer to fill orders.
  3. Keep the instruction set simple and tight for each core type. The design of the cores and the layout of the CPU get more complicated, but the processing gets simpler, reducing compilation overhead and the energy spent parsing complex instructions.
  4. The I/O wafer might grow into the I/O/S wafer, for Input/Output/Security. Its security block could replace the southbridge security completely.


There are, of course, risks that need mitigation. I will outline the ones I see, and their mitigations, below.


The largest risk is not owning the OS: you will need Microsoft’s assistance building and tuning for the new architecture. However, this gives Microsoft the chance to sell a new OS with a translation layer for the new architecture. Their OS, their office suite, their browser, all optimized at launch. Collaboration with Adobe as mentioned above.

Zero Core Package

A user might have purchased a general-purpose CPU with little or no AI capability. Perhaps a general-purpose instruction set lives on the I/O wafer, used only in the rare event someone pushes the system outside its normal configuration. Or every SKU ships with a base configuration. Some research on the best base option would need to be conducted.
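The fallback behavior could look like the dispatch sketch below. It is an assumption-laden illustration (the `dispatch` function, the package set, and the slow-path wording are all hypothetical): workloads route to their matching core package when one is installed, and otherwise drop to the base instruction set on the I/O wafer.

```python
# Sketch of the zero-core-package fallback; names are hypothetical.
def dispatch(workload: str, installed_packages: set) -> str:
    """Route a workload to its core package, or to the I/O-wafer base set."""
    if workload in installed_packages:
        return f"run on {workload} core package"
    # Rare out-of-configuration event: use the base general-purpose set.
    return "run on I/O-wafer base instruction set (slow path)"

chip = {"PR", "GA"}          # a productivity/gaming SKU with no AI package
print(dispatch("GA", chip))  # run on GA core package
print(dispatch("AI", chip))  # run on I/O-wafer base instruction set (slow path)
```

The design question the paragraph raises is exactly how capable that slow path must be, since it sets the floor for every SKU.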



Getz Pro


Every good gift and every perfect gift is from above, and comes down from the Father of lights, with whom there is no variation or shadow of turning.

James 1:17