A discussion with André LaMothe on multi-core programming, future
computer technology, and artificial intelligence...
Until about the end of 2005, the race for faster computers and game
systems was driven mainly by increases in the clock frequency of the
processor. This largely determined the number of instructions
processed per second, but with higher switching frequencies came
higher temperatures, and a thermal barrier was reached above 3 GHz.
To continue the progress in computing power, chip manufacturers are
beginning to multiply the number of processing cores rather than
increase the speed of a single core.
André LaMothe, computer scientist and author of many books on
programming, has recently designed a console system to teach the
fundamentals of multi-core development. He has also agreed to
discuss this topic with us, as well as go a little further into the
future and theoretical possibilities of computer science.

Q1. How exactly does multi-core pick up where increased clock speed
leaves off?

A. Well, multi-core and multiprocessing have actually been around as
long as computers themselves; it's just a matter of
cost/performance. Each generation has easily been able to double
clock speeds and go to smaller feature sizes, but we are starting to
hit the wall with silicon, so chip developers have to get serious
about multi-core designs. The idea is simple: instead of doubling
the clock speed, you double the number of processing elements and
you can do twice the computation. Of course, this is only
theoretical. The problem is that when you have multiple cores,
"someone" is responsible for keeping them busy, meaning that the
code has to be compiled to exploit them, or the programmer has to do
it manually.
Multi-core programming has actually been around for years on the PC
platform; it's just that most programmers didn't know it. The
original Pentium even had two execution pipelines, the U and V
pipes, that could each perform a computation if you ordered the
instructions correctly - a crude form of multiprocessing. So more or
less, instead of increasing clock speed by a factor of N, you
increase the number of processors by a factor of N and distribute
the work. There is no real limit on this, but the problem is
distributing the work and then potentially re-assembling the
results.
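
To make the split-and-reassemble idea concrete, here is a minimal
sketch in C++ (our illustration, not André's code): a large
summation is divided across however many hardware cores are
available, and the per-core partial sums are combined at the end.

    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        // Use one worker per hardware core; fall back to 1 if unknown.
        const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
        std::vector<int> data(1000000, 1);         // the work to distribute
        std::vector<long long> partial(cores, 0);  // one result slot per core
        std::vector<std::thread> workers;

        const std::size_t chunk = data.size() / cores;
        for (unsigned i = 0; i < cores; ++i) {
            std::size_t begin = i * chunk;
            std::size_t end = (i == cores - 1) ? data.size() : begin + chunk;
            // Each thread sums only its own slice: no sharing, no locks.
            workers.emplace_back([&, i, begin, end] {
                partial[i] = std::accumulate(data.begin() + begin,
                                             data.begin() + end, 0LL);
            });
        }
        for (auto& w : workers) w.join();  // wait for every core to finish

        // "Re-assembly": combine the per-core partial sums into one answer.
        long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
        std::cout << "sum = " << total << "\n";
    }

Each thread writes to its own slot of the partial-results vector, so
no locking is needed until the single-threaded combine at the end -
and that combine step is exactly the re-assembly overhead André
warns about.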
Q2. What does this mean to current programmers and those who will be
entering the computer science profession?
A. Not much; as I mentioned, multi-core programming has been around
for a long time. Most programmers working on a PowerPC, or on a
Pentium with multiple cores, hyper-threading, and so forth, don't
even know they are, since the compiler and processor handle the
problem for the most part. However, for performance programmers it
means a lot. It means that if they manually take control of these
machines they can get them to do quite a bit more than the compilers
otherwise would be able to.
Additionally, it will help programmers to know about multiprocessing
when they program. Nonetheless, realistically it won't matter much.
I can make a good analogy: the majority of web programmers don't
know much at all about programming, data structures, algorithms, AI,
or compiler theory, yet they can "program". In other words, they are
surrounded by so much supporting technology that
"<HTML>Hello World</HTML>" causes about 1,000,000 lines of code to
execute across the server, the browser, and finally the renderer. So
I doubt that most people will even need to learn multi-core
programming, since the compilers will handle it. However, people
will know that more is better, and that more cores are better, so it
will be more of a marketing indicator. But for the hardcore
performance enthusiasts, it will be a whole new opportunity.
Q3. Starting at the drawing board, how do programmers need to
begin their design process differently in order to maximize the
benefits of multiple cores?
A. First, "parallel programming" isn't for everything. New
programmers to the concept tend to get carried away and the gains
they make in parallel programming are lost by "re-assembly". Thus,
the idea is to find processes in your program that are either: A.
totally asynchronous and then run them each on a processor, or B. if
you can find separable problems (graphics are a good example), then
the question is, "in breaking the problem into parts, and solving a piece
on each processor, can you then re-assemble the results faster than
just doing the whole thing on a single processor?" That is, there is
"overhead" in the separation and re-assembly of the solution, so you
have to watch out. Additionally, if processes are going to talk to
each other, or share resources, then synchronization becomes a big
problem. For example, if processor A has to wait for B has to wait
for C, then there is no parallelism, thus, you need to make sure that
processes (on each processor) can run as independently as possible.
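
As a hedged illustration of the separable case (again our example,
not André's): each worker below owns a disjoint band of rows in an
image buffer, so no thread ever waits on another, and "re-assembly"
is free because every band is finished in place.

    #include <algorithm>
    #include <cstdint>
    #include <thread>
    #include <vector>

    int main() {
        const int width = 640, height = 480;
        std::vector<std::uint8_t> pixels(width * height, 100);  // a grey image

        const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::thread> workers;
        const int band = height / static_cast<int>(cores);

        for (unsigned i = 0; i < cores; ++i) {
            const int y0 = static_cast<int>(i) * band;
            const int y1 = (i + 1 == cores) ? height : y0 + band;
            // Rows [y0, y1) belong to this thread alone: no locks, no waiting.
            workers.emplace_back([&, y0, y1] {
                for (int y = y0; y < y1; ++y)
                    for (int x = 0; x < width; ++x)
                        pixels[y * width + x] += 50;  // brighten this band
            });
        }
        for (auto& w : workers) w.join();
        // No re-assembly step needed: each band was finished in place.
    }

Contrast this with a chained dependency: if each band needed its
neighbor's finished rows first, the threads would serialize into the
A-waits-for-B-waits-for-C situation and the parallelism would
evaporate.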
Q4. What potential headaches and consequences does this add to the
already difficult process of software development?
A. Taking full advantage of parallelism is problem number one. It's
relatively hard to just take a program and break it up into parallel
tasks. Also, if a programmer does so, then instead of having a
single processor running a single thread of execution he now has 2,
3, 10, 100 processors or more doing things, and the simple act of
debugging becomes much more complex, as do the tools needed. So I
think that multiprocessor programming is going to have the same
troubles it always has had: writing the code, getting it to work,
debugging it. This will be remedied more and more as parallel
compilers mature.
Q5. How critical is it to "balance" software threads, and how do you
prevent some cores from sitting idle and wasting valuable hardware?
A. If you are talking about software threads running on a single
processor, then it's not very critical. The operating system is
going to schedule your threads anyway, so not much is wasted, since
you can't hog the system. However, if we drill down to a single
thread, the same goal applies as with multiple threads: keep the
thread busy. This means NO WAIT LOOPS; if you need to wait on
something, then do some work that you know takes less time, do the
work, and then wait a little bit rather than a lot. Simply keep the
processor thread working, as in the sketch below.
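
Here is one minimal way to follow that advice in C++ (our sketch,
with an illustrative "housekeeping" chore standing in for the useful
work): the main thread does a chunk of known-short work first, then
blocks cheaply on a condition variable instead of spinning on a
flag.

    #include <chrono>
    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <thread>

    std::mutex m;
    std::condition_variable cv;
    bool ready = false;
    int result = 0;

    int main() {
        // A producer that takes a while to deliver a result.
        std::thread producer([] {
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
            {
                std::lock_guard<std::mutex> lock(m);
                result = 42;
                ready = true;
            }
            cv.notify_one();
        });

        // Do some useful work we know takes less time than the wait...
        long long housekeeping = 0;
        for (int i = 0; i < 1000000; ++i) housekeeping += i;

        // ...then block cheaply on a condition variable instead of
        // spinning on the flag (i.e., NOT "while (!ready) {}").
        {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [] { return ready; });
        }
        std::cout << "result = " << result
                  << " (housekeeping = " << housekeeping << ")\n";
        producer.join();
    }

The condition variable puts the thread to sleep until the producer
signals, so no cycles are burned polling - the core is free for
other work during the remaining wait.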
Q6. How much further do you see multi-core technology taking us and
what are its limitations?
A. I think that we are going to see a number of technologies all
aggregate in the next 10 years. New (old, actually) semiconductor
materials that are faster will be used, and spintronic processor
cores will start to become available, but certainly there will be
more and more processors in each generation. But, as I alluded to,
this is not new stuff. There have been many, many companies with
multiprocessors and multi-cores out there; graphics cards are the
best example, and they have been doing it for 15 years. So I just
think that our degree of freedom with clock speeds is starting to
wane, and thus multiple cores are simply going to have to be there.
I think that you will see 16-64 cores in next-generation processors,
then 64-256, and so forth; it will scale the way clock speeds and
data bus widths did.
Q7. What do you see as the next technological step beyond today’s
multi-core on silicon?
A. Spintronics and quantum computing are the next frontiers, without
a doubt. Moving electrons around is brute force, but like internal
combustion, it works, it's easy to understand, and everyone does it
- yet at some point things have to change. So I would say that in
8-12 years spintronic processors will start to come out, and in
10-15 years the first commercial quantum processors that "do
something" will start to hit the streets.
Q8. In 1965 Gordon Moore observed in an article in Electronics
magazine that chip complexity seemed to double about every year - a
rate he later revised to every two years, popularly quoted as 18
months. This observation seems to have been the benchmark that the
chip industry has been chasing. Do you think that Moore's Law has
driven advancement, or do you think it has provided a psychological
"speed limit" that may have stunted innovation?
A. I think Moore's Law has nothing to do with advancement. I think
that it's more or less obvious, when you finish something, how to
make it 2x as fast or as small, or more. It's just that getting
everyone together to do it takes about a year or two, so it's really
just a good insight into the "engineering process" and how long a
revision takes in the real world by a master engineer.
Q9. Artificial Intelligence seems to be the ultimate "brass ring"
for computer scientists. Beginning with basic logic functions and
ending with the replication of human consciousness, how far along
this journey do you believe we currently are with our most advanced
systems?

A. This is a hard question. We have the computing power, but so far
even copying an ant is beyond our reach. I think the real problem is
knowing how to approach the problem, and it's something I think
about a lot. We need a radical paradigm shift in AI to make real
ground. We are inching toward it right now, but computer scientists
have to back up from the problem and look at it in a much more
biological way, and model it more biologically, both structurally
and programmatically. Then it might be a matter of building systems
that can become sentient, and THEN understanding how they work. For
example, a 6502 with 256 bytes of memory can represent 256^256
programs. We have never explored all of them, even though we have
all played with 6502s. Thus, self-programming and learning are the
keys to this: we need to build a system with the tools to learn, let
it go from there, and then try to understand the results. Time for
this to happen... 20-30 years before we have any really good AIs
that are general in their solution domain. But more and more we are
going to see AIs applied to things like driving, etc.
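
(For scale - our arithmetic, not André's: 256 bytes of 8 bits each
can hold 256^256 = (2^8)^256 = 2^2048, roughly 3.2 x 10^616 distinct
memory images, far more than could ever be enumerated exhaustively.)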
Q10. The Turing Test – a test in which a computer can carry on a
human-like conversation – seems to be the most widely accepted
method to determine when we reach AI. Do you have your own idea for
this type of test?
A. I think this test is as ridiculous as an IQ test; it's from half
a century ago. I could have a conversation with half the teenagers
in this country and I would think they are computers :) So, I think
the Turing test is too domain-specific. I think that a test for
intelligence should be a battery of thousands of questions and
problems you can ask any age group, and if the results are
indistinguishable, then it's as good as it needs to be - but this
has nothing to do with whether it's "alive" or sentient. That is a
hard thing to put your finger on, souls and religion aside, whether
you believe in them or not. What is self-awareness? No one really
knows. But I know one thing: if we can copy it, we can understand
it. And I guarantee that if you take a human being and replace every
single neuron with a synthetic neuron, one by one, that person will
still be that person. So consciousness, I believe, is pure
information. It's a pattern, not physical in nature.
Q11. What do you think is the ultimate purpose that computers can
serve in our lives?
A. Truthfully... to replace us. Evolution and natural selection for
the human race are at a standstill, so the only real way for us to
evolve is through augmentation and re-design by ourselves. But in
the next 100 years, I think that computers will continue to be
everywhere to perform work, and AI will help with jobs that no one
wants or that are too dangerous, or will provide artificial friends,
playmates, babysitters, pets, teachers, police, explorers, and so
forth.
In addition to being the author of many books on computer and
computer game programming, André LaMothe is CEO of Nurve Networks
LLC. His company designs and manufactures do-it-yourself computer
gaming consoles that are an excellent way to learn the important
fundamentals of digital electronics and computer programming.
They're also a lot of fun and let you play real video games that you
design and write yourself. Nurve's latest console, the Hydra,
employs a processor chip with multiple processing cores and is a
great way to gain skills in multi-core programming. Visit these
sites to