Sunday, February 25, 2018

Digital Design - Chapter 1 part1

Supercomputers, which at present consist of tens of thousands of processors
and many terabytes of memory, cost tens to hundreds of millions of dollars.
They are usually used for high-end scientific and engineering calculations,
such as weather forecasting, oil exploration, protein structure determination,
and other large-scale problems.  


Embedded computers are the most significant class of machines and span the broadest
range of applications and performance. Embedded computers include the
microprocessors found in your car, the computers in a television set, and the
networks of processors that control a modern airplane or cargo ship.  
Taking the place of the PC is the personal mobile device (PMD).
Taking over from the traditional server is Cloud Computing, which relies upon
large data centers that are now known as Warehouse Scale Computers (WSCs).
Companies like Amazon and Google build these WSCs containing 100,000 servers
and then let companies rent portions of them so that they can provide software
services to PMDs without having to build WSCs of their own. Indeed, Software as
a Service (SaaS) deployed via the cloud is revolutionizing the software industry just
as PMDs and WSCs are revolutionizing the hardware industry. Today’s software
developers will often have a portion of their application that runs on the PMD and
a portion that runs in the Cloud.  
Design for Moore’s Law → the up-and-to-the-right Moore’s Law graph
Use Abstraction to Simplify Design → abstract painting icon
Make the Common Case Fast → sports car vs. minivan!
Performance via Parallelism → multiple jet engines of a plane as the icon for
parallel performance
Performance via Pipelining → sequence of pipes
Performance via Prediction → fortune-teller’s crystal ball icon
Hierarchy of Memories → fast cache on top and the rest on the bottom
Dependability via Redundancy → when hardware fails, the system continues to operate
Systems software sits between the hardware and applications software.
There are many types of systems software, but two are central
to every computer system today: an operating system and a compiler.
An operating system interfaces between a user’s program and the hardware
and provides a variety of services and supervisory functions. Among the most
important functions are:
■ Handling primary input and output operations
■ Allocating storage and memory
■ Providing for protected sharing of the computer among multiple applications
using it simultaneously.  
Compiler: a program that translates high-level language statements into assembly language statements.
Binary digit: also called a bit; one of the two numbers in base 2 (0 or 1) that are the components of information.
Instruction: a command that computer hardware understands and obeys.
Assembler: a program that translates a symbolic version of instructions into the binary version.
Assembly language: a symbolic representation of machine instructions.
Machine language: a binary representation of machine instructions.
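To make the assembly-language vs. machine-language distinction concrete, here is a small sketch that packs the symbolic instruction `add $t0, $s1, $s2` into its 32-bit binary form. The MIPS R-format field layout used here is borrowed from later chapters of the book and is only one possible encoding scheme:

```python
# Sketch: encode one MIPS R-format instruction by hand.
# Field layout (bit widths): op(6) rs(5) rt(5) rd(5) shamt(5) funct(6)

def encode_r_format(op, rs, rt, rd, shamt, funct):
    """Pack the six R-format fields into a 32-bit machine word."""
    return (op << 26) | (rs << 21) | (rt << 16) | (rd << 11) | (shamt << 6) | funct

# add $t0, $s1, $s2  means  $t0 = $s1 + $s2
# Register numbers: $t0 = 8, $s1 = 17, $s2 = 18; 'add' has op 0, funct 32.
word = encode_r_format(op=0, rs=17, rt=18, rd=8, shamt=0, funct=32)

print(f"{word:032b}")   # the machine-language (binary) form
print(f"0x{word:08x}")  # prints 0x02324020
```

The assembler's job is essentially this table lookup and bit packing, done for every instruction in the program.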
The five classic components of a computer are input, output, memory,
datapath, and control, with the last two sometimes combined and called
the processor. Figure 1.5 shows the standard organization of a computer.
This organization is independent of hardware technology: you can place
every piece of every computer, past and present, into one of these five
categories. To help you keep all this in perspective, the five components of
a computer are shown on the front page of each of the following chapters,
with the portion of interest to that chapter highlighted.  
Skipped --------
Performance :
response time: also called execution time. The total time required for the computer to complete a task,
including disk accesses, memory accesses, I/O activities, operating system overhead, CPU execution time, and so on.
Throughput: Also called bandwidth. Another measure of performance, it is the number of tasks
completed per unit time.  
Do the following changes to a computer system increase throughput, decrease response time, or both?
1. Replacing the processor in a computer with a faster version
2. Adding additional processors to a system that uses multiple processors for separate tasks
(for example, searching the web)
Decreasing response time almost always improves throughput. Hence, in case
1, both response time and throughput are improved. In case 2, no one task gets
work done faster, so only throughput increases.  
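The two cases can be checked with a toy calculation (the per-task time below is made up purely for illustration): a faster processor shrinks each task's response time, while a second processor working on separate tasks leaves response time unchanged but doubles throughput.

```python
task_time = 10.0  # seconds per task on the original processor (assumed)

# Case 1: a processor twice as fast halves response time AND doubles throughput.
resp_fast = task_time / 2    # 5.0 s per task
tput_fast = 1 / resp_fast    # 0.2 tasks/s

# Case 2: two processors on separate tasks; each task still takes 10 s,
# but two tasks now finish every 10 s.
resp_two = task_time         # 10.0 s per task (unchanged)
tput_two = 2 / task_time     # 0.2 tasks/s (doubled)

print(f"case 1: response {resp_fast} s, throughput {tput_fast} tasks/s")
print(f"case 2: response {resp_two} s, throughput {tput_two} tasks/s")
```

Both cases end up with the same throughput here, but only case 1 made any individual task finish sooner.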
If, however, the demand for processing in the second case was almost
as large as the throughput, the system might force requests to queue up. In
this case, increasing the throughput could also improve response time, since
it would reduce the waiting time in the queue. Thus, in many real computer
systems, changing either execution time or throughput often affects the other.  
CPU execution time: also called CPU time. The actual time the CPU spends computing for a specific task.
User CPU time: the CPU time spent in the program itself.
System CPU time: the CPU time spent in the operating system performing tasks on
behalf of the program.
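The user/system split can be observed directly. Here is a minimal sketch using Python's standard `os.times()`, which is one way an OS exposes these two counters for the current process:

```python
import os

# Do some pure computation, which counts toward user CPU time.
total = sum(i * i for i in range(2_000_000))

t = os.times()
user_time = t.user        # CPU time spent executing the program's own code
system_time = t.system    # CPU time the OS spent working on the program's behalf
cpu_time = user_time + system_time  # CPU execution time for this process

print(f"user={user_time:.3f}s system={system_time:.3f}s cpu={cpu_time:.3f}s")
```

Note that `cpu_time` excludes time the process spent waiting (for I/O, or for other programs to run), which is why it differs from wall-clock response time.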
§1.6, page 33: 1. a: both, b: latency, c: neither. 7 seconds.  



