Intel's Nick Knupffer comments on high performance computing and silicon photonics

If you gave a futurist a workshop to actually build their ideas, they would often have to revise their predictions. It’s easy for imagination to skate past the stubborn details and technical difficulties that impede progress. Engineers have done remarkable work overcoming those things, though, and I find it exciting that people can have the equivalent of a 1980s or ‘90s supercomputer sitting on their desk or lap.

Microprocessors are certainly the most significant technological step towards machine autonomy. In his book “The Microprocessor: A Biography,” tech journalist Michael Malone calls the device the Rosetta Stone of our culture. That’s a big claim, but most likely an accurate one. The microprocessor has given us the ability to code human knowledge and decision-making into what had previously been just a calculating device.

It’s been a long evolutionary journey from the Intel 4004 (the first microprocessor) to today’s multi-core processors, and two frontiers are particularly fascinating right now: high-performance computing and silicon photonics. I posed some questions to Intel’s Nick Knupffer on these two topics, and these are the comments he shared:

Q. ‘High performance computing’ is a relative term. The Cray-1 was considered a supercomputer in the 1970s but can’t compare to one of today’s video game consoles. How would you define ‘high performance computing’ at this moment?

A. Most high-performance computing applications have some characteristic of parallelism and a data size that grows with time. Essentially, they drive compute performance to the limits of what is possible at the time. Where a system with a thousand processors was the state of the art a few years ago, systems with 30,000 processors are built today. HPC customers tend to say “great... then I will increase the resolution of my model, add in more data, simulate with shorter time steps,” and expand the amount they compute to match the increased scale of the system.

HPC is a subset of “technical computing”: customers computing to achieve a technical purpose. Usually simulation, analysis, design, and visualization are part of this technical arc. HPC customers use groups of individual computers working concurrently on a common problem.
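
The pattern described here, where customers grow the problem to match the machine, is what HPC practitioners call weak scaling, often modeled with Gustafson's law. A minimal Python sketch of that arithmetic, using the system sizes from the answer (the 1% serial fraction is an assumed illustrative value, not a figure from the interview):

```python
# Weak scaling per Gustafson's law: the scaled speedup of a job that
# grows with the machine is S(N) = N - s * (N - 1), where s is the
# fraction of the work that stays serial. s = 0.01 is illustrative only.

def gustafson_speedup(processors: int, serial_fraction: float) -> float:
    return processors - serial_fraction * (processors - 1)

for n in (1_000, 30_000):  # system sizes mentioned in the answer
    print(f"{n:>6} processors -> scaled speedup ~ {gustafson_speedup(n, 0.01):,.0f}")
```

Even a 1% serial fraction caps the 30,000-processor system at roughly 29,700 times the single-processor rate, which is why growing the data along with the machine matters so much.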

Q. Is high performance computing considered mainly for businesses and institutions, or does it extend into the consumer market?

A. There are three good examples of consumer HPC. The first is immersive multiplayer games, where compute is distributed between many computers sharing a common simulation problem. The second is a potential 3D content-generation application, and the third is mobile augmented reality.

Q. What form do you see it taking in the consumer market?

A. Cloud computers providing HPC capability for short-run jobs. For example, a customer takes 3D video, wants to augment it with animation, and does cloud-based rendering. The compute is not in the device or computer local to the consumer; it’s in the cloud. Mobile augmented reality also has elements like this.
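
A minimal sketch of that offload pattern, with a local process pool standing in for the cloud render farm and a hypothetical render_frame placeholder for the actual rendering work:

```python
# Sketch: the consumer device fans short-run render jobs out to remote
# compute. ProcessPoolExecutor is a local stand-in for a cloud render
# farm; render_frame is a hypothetical placeholder, not a real API.

from concurrent.futures import ProcessPoolExecutor

def render_frame(frame_number: int) -> str:
    # A real job would ray-trace or composite the augmented frame here.
    return f"frame_{frame_number:04d}.png"

if __name__ == "__main__":
    with ProcessPoolExecutor() as farm:
        results = list(farm.map(render_frame, range(240)))  # ~10 s at 24 fps
    print(f"rendered {len(results)} frames, e.g. {results[0]}")
```

The design point is that the device only submits work and collects results; all of the heavy compute lives on the other side of the pool boundary.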

Q. You’re still making significant architectural advancements in the microprocessor core, but the trend has been to increase the number of cores on the chip in order to accelerate processing. It seems that CPU clock speed topped out between 3 and 4 GHz. Did it become a heat problem, or did the switching speed of silicon transistors start to max out?

A. There is nothing magic about 3 or 4 GHz. The issue is the scaling of performance with frequency. If I improve system clock speed by 10%, I increase performance by 10%. Cool... but I’ve got a smaller chip, so what do I do with the extra area? If I drop the frequency a tiny bit, say 10%, I can increase the number of cores by 33% and get a net 27% performance increase within approximately the same power envelope. Our ability to make small cores that do everything we want is what makes core scaling and core improvement (like AVX) a better area to focus on than increasing frequency. For now.
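
The area-for-cores tradeoff is easy to model with back-of-the-envelope arithmetic. A minimal sketch, assuming performance scales linearly with both frequency and core count, which only holds for perfectly parallel workloads:

```python
# Idealized model of the tradeoff described above: spend a die shrink's
# extra area on clock speed or on more cores. Assumes performance is
# proportional to frequency x core count (a perfectly parallel workload).

def relative_performance(freq_scale: float, core_scale: float) -> float:
    return freq_scale * core_scale

clock_option = relative_performance(1.10, 1.00)  # +10% frequency
cores_option = relative_performance(0.90, 1.33)  # -10% frequency, +33% cores

print(f"faster clock: {clock_option:.2f}x")  # 1.10x
print(f"more cores:   {cores_option:.2f}x")  # ~1.20x under these assumptions
```

Under this simple linear model the core-heavy option nets roughly 20%; the exact figure depends on the workload and power model assumed.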

Q. First the graphical user interface, then multimedia applications, drove demand for faster processors. With your current product lineup, it seems that the CPU is well out ahead of the software demands. Is it mostly competitive pressures that are driving CPU advancements now, or is my perception about the software demands incorrect?

A. Software demands continue to drive performance expectations in technical computing, for sure. Customers doing simulation and analysis need more compute every year. Software demands are also driving the increased performance required in handheld devices, smartphones and the like. In PCs we are entering an age of autonomic computing: tons of stuff is going on with your PC that you can’t see but that is protecting your interests. Virus scan, visualization, backup, and predictive search are all running to make your experience more enjoyable. These things all continue to move PC performance requirements up.

Q. I understand that a processor development cycle is about two years, from design to production. What are your product engineers looking ahead at in terms of likely applications and the processing requirements of those applications?

A. Actually, it is closer to 4 or 5 years. If you want to get an idea of the types of applications we are looking at, you can check out this website: http://www.intel.com/go/terascale/

Q. A while back you demonstrated a technology you call “Light Peak,” which was described as a “souped-up version of the USB connection” and brings optical transmission directly to the chip. Would this be analogous to “fiber to the home” broadband, in that it extends the benefits of optical data transmission closer to the source?

A. Not really. Light Peak is designed to connect the PC to external devices and can run several protocols at the same time (USB, HDMI, PCI Express, etc.), using light or copper. But eventually, it is conceivable that our silicon photonics research will be able to do what you are describing: http://techresearch.intel.com/ResearchAreaDetails.aspx?Id=26

Q. Silicon photonics research is at the heart of this technology and seems to be an important part of Intel’s R&D. Is this technology research limited to optical transmission to and within the chip, or do you envision replacing the transistor logic as well with an optical equivalent?

A. Never say never, but that isn’t yet part of the research program.

Q. Do you see any promise in quantum processors yet or is the technology too sketchy to hazard any kind of prediction?

A. No predictions, sorry!
