Monday, July 2, 2007

Group Assignment

History of Intel



In 1978 Intel took the next step up by introducing the 8086 processor, one of the earliest 16-bit processors. IBM decided to use it, in its 8088 variant, for its first PC.
In 1982 Intel introduced the 80286, which added more sophisticated memory management and was popularized by the IBM PC/AT in 1984.
The 80386 was introduced in 1985; it was a 32-bit machine with real memory management (paging).
In 1989 the 80486 arrived, sporting an integrated floating-point unit and higher performance.
The Pentium was introduced in 1993.








Intel was founded on July 18, 1968, with one main goal in mind: to make semiconductor memory more practical. Intel's first microprocessor, the 4004, was released at the end of 1971. The chip was smaller than a thumbnail, contained 2,300 transistors, and was capable of executing 60,000 operations per second. Shortly after the release of the 4004, the 8008 was released; it was capable of executing twice as many operations per second as the 4004. Intel's commitment to the microprocessor led to IBM's choice of Intel's 8088 chip as the CPU of its first PC. In 1982, Intel introduced the first 286 chip; it contained 134,000 transistors and provided around three times the performance of other microprocessors of the time.

In 1989 the 486 processor was released, containing 1.2 million transistors and the first built-in math coprocessor. The chip was approximately 50 times faster than Intel's original 4004 processor and equaled the performance of a powerful mainframe computer. In 1993 Intel introduced the Pentium processor, which was five times as fast as the 486; it contained 3.1 million transistors and was capable of 90 million instructions per second (MIPS). In 1996 Intel introduced its MMX technology, designed to enhance the computer's multimedia performance. Throughout the years that followed, Intel released several lines of processors, including the Celeron, the Pentium II, Pentium III, and Pentium 4. Intel processors now reach speeds upwards of 2200 MHz, or 2.2 GHz.









Origin of the name





At its founding, Gordon Moore and Robert Noyce wanted to name their new company "Moore Noyce". This name, however, sounded remarkably similar to "more noise", an ill-suited association for an electronics company, since noise typically suggests bad interference. They then used the name NM Electronics for almost a year before deciding to call their company INTegrated ELectronics, or "Intel" for short. However, "Intel" was already trademarked by a hotel chain, so they first had to buy the rights to the name.











Founders of Intel









Intel was founded in 1968 by Gordon E. Moore (a chemist and physicist) and Robert Noyce (a physicist and co-inventor of the integrated circuit) when they left Fairchild Semiconductor. A number of other Fairchild employees also went on to participate in other Silicon Valley companies. Intel's fourth employee was Andy Grove (a chemical engineer), who ran the company through much of the 1980s and the high-growth 1990s. Grove is now remembered as the company's key business and strategic leader. By the end of the 1990s, Intel was one of the largest and most successful businesses in the world, though fierce competition within the semiconductor industry has since diminished its position.











(Photo: Intel headquarters in Santa Clara)



Moore's Law


Moore's Law is the empirical observation made in 1965 that the number of transistors on an integrated circuit for minimum component cost doubles every 24 months.[1][2] It is attributed to Gordon E. Moore (born 1929),[3] a co-founder of Intel. Although it is sometimes quoted as every 18 months, Intel's official Moore's Law page, as well as an interview with Gordon Moore himself, state that it is every two years.


Earliest forms


The term Moore's Law was coined by Carver Mead around 1970.[4] Moore's original statement can be found in his publication "Cramming more components onto integrated circuits", Electronics Magazine, 19 April 1965.


Under the assumption that chip "complexity" is proportional to the number of transistors, regardless of what they do, the law has largely stood the test of time to date. However, one could argue that the per-transistor complexity is lower in large RAM cache arrays than in execution units. From this perspective, the validity of one formulation of Moore's Law may be more questionable.
Gordon Moore's observation was not named a "law" by Moore himself, but by the Caltech professor, VLSI pioneer, and entrepreneur Carver Mead.[2] Moore, indicating that it cannot be sustained indefinitely, has since observed: "It can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens."[5]
Moore may have heard Douglas Engelbart, a co-inventor of today's mechanical computer mouse, discuss the projected downscaling of integrated circuit size in a 1960 lecture.[6] In 1975, Moore projected a doubling only every two years. He is adamant that he himself never said "every 18 months", but that is how it has been quoted. The SEMATECH roadmap follows a 24-month cycle.


Understanding Moore's Law
Moore's law is not just about the density of transistors that can be achieved, but about the density of transistors at which the cost per transistor is the lowest.[1] As more transistors are made on a chip, the cost to make each transistor falls, but the chance that the chip will not work due to a defect rises. If the rising cost of discarded non-working chips is balanced against the falling cost per transistor of larger chips, then, as Moore observed in 1965, there is a number of transistors, or complexity, at which "a minimum cost" is achieved. He further observed that as transistors were made smaller through advances in photolithography, this number would increase at "a rate of roughly a factor of two per year".[1]
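To make that cost trade-off concrete, here is a rough Python sketch. It is not from Moore's article; the chip cost and defect rate below are invented purely for illustration. Spreading a fixed chip cost over more transistors pushes the per-transistor cost down, while falling yield pushes it back up, and the two effects produce a minimum-cost complexity.


# Illustrative sketch only: the cost model and the numbers below are made up,
# not taken from Moore's 1965 article.
def cost_per_good_transistor(transistors, chip_cost=1000.0, defect_rate=1e-6):
    yield_fraction = (1 - defect_rate) ** transistors   # yield falls as complexity grows
    effective_chip_cost = chip_cost / yield_fraction     # discarded chips must be paid for too
    return effective_chip_cost / transistors

for n in (10_000, 100_000, 1_000_000, 5_000_000):
    print(n, cost_per_good_transistor(n))
# The cost per transistor falls, reaches a minimum (around 1,000,000 transistors
# with these made-up numbers), then rises again as yield collapses.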


Formulations of Moore's Law

(Figure: PC hard disk capacity in GB. The plot is logarithmic, so the fit line corresponds to exponential growth.)
The most popular formulation is the doubling of the number of transistors on integrated circuits every 18 months. At the end of the 1970s, Moore's Law became known as the limit for the number of transistors on the most complex chips. However, it is also common to cite Moore's Law to refer to the rapidly continuing advance in computing power per unit cost, because the increase in transistor count is also a rough measure of computer processing power. On this basis, the power of computers per unit cost, or more colloquially "bangs per buck", doubles every 24 months (or, equivalently, increases 32-fold in 10 years).
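As a quick check on that arithmetic, here is a tiny illustrative Python computation of how a 24-month doubling period compounds over ten years:


# Illustrative arithmetic: a doubling every 24 months over 120 months (10 years)
doubling_period_months = 24
horizon_months = 10 * 12
growth_factor = 2 ** (horizon_months / doubling_period_months)
print(growth_factor)   # 2**5 = 32.0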


Amdahl's Law


Amdahl's law, named after computer architect Gene Amdahl, is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors.


Amdahl's Law is a law governing the speedup of using parallel processors on a problem, versus using only one serial processor. Before we examine Amdahl's Law, we should gain a better understanding of what is meant by speedup.
Speedup:
The speed of a program is measured by the time it takes the program to execute. This could be measured in any unit of time. Speedup is defined as the time it takes a program to execute in serial (with one processor) divided by the time it takes to execute in parallel (with many processors). The formula for speedup is:


S = T(1) / T(j)



where T(j) is the time it takes to execute the program when using j processors. Efficiency is the speedup divided by the number of processors used. This is an important factor to consider: given the cost of multiprocessor supercomputers, a company wants to get the most bang for its dollar.
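For example, speedup and efficiency can be computed directly from measured run times. The Python sketch below is only an illustration; the timings and the eight-processor count are made-up values.


# Illustrative sketch: speedup and efficiency from made-up timings.
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel               # S = T(1) / T(j)

def efficiency(t_serial, t_parallel, j):
    return speedup(t_serial, t_parallel) / j   # speedup per processor

t1 = 100.0    # seconds on 1 processor (hypothetical)
t8 = 20.0     # seconds on 8 processors (hypothetical)
print(speedup(t1, t8))        # 5.0
print(efficiency(t1, t8, 8))  # 0.625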
To explore speedup further, we shall do a bit of analysis. If there are N workers working on a project, we may assume that they would be able to do the job in 1/N of the time it takes one worker working alone. Now, if we assume the strictly serial part of the program is performed in B*T(1) time, then the strictly parallel part is performed in ((1-B)*T(1)) / N time. With some substitution and algebraic manipulation, we get the formula for speedup as:



S = N / ((B*N) + (1-B))
N = number of processors
B = fraction of the algorithm that is serial



This formula is known as Amdahl's Law. The following is a quote from Gene Amdahl in 1967:
For over a decade prophets have voiced the contention that the organization of a single computer has reached its limits and that truly significant advances can be made only by interconnection of a multiplicity of computers in such a manner as to permit co-operative solution...The nature of this overhead (in parallelism) appears to be sequential so that it is unlikely to be amenable to parallel processing techniques. Overhead alone would then place an upper limit on throughput of five to seven times the sequential processing rate, even if the housekeeping were done in a separate processor...At any point in time it is difficult to foresee how the previous bottlenecks in a sequential computer will be effectively overcome.
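To see what the formula above implies, here is a short Python sketch; the 10% serial fraction is an arbitrary example value, not a figure from Amdahl. As processors are added, the speedup levels off near 1/B.


# Illustrative sketch of Amdahl's Law: S = N / (B*N + (1 - B)).
# B = 0.10 (10% serial) is an arbitrary example value.
def amdahl_speedup(n_processors, serial_fraction):
    n, b = n_processors, serial_fraction
    return n / (b * n + (1 - b))

for n in (1, 2, 4, 8, 16, 64, 1024):
    print(n, round(amdahl_speedup(n, 0.10), 2))
# Prints 1.0, 1.82, 3.08, 4.71, 6.4, 8.77, 9.91: the speedup creeps toward
# 1 / 0.10 = 10 but never exceeds it, no matter how many processors are added.


With a 10% serial portion, no amount of parallel hardware can push the speedup past ten, which is exactly the kind of limit Amdahl was pointing at in the quote above.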


Members:


Kenneth P. Magcalayo


Genesis V. Madriaga


Andie R. Pason


Haris D. Kambang


April John C. Olaveja
