Monthly Archives: August 2013
What is a Computer?
In its most basic form a computer is any device which aids humans in performing various kinds of computations or calculations. In that respect the earliest computer was the abacus, used to perform basic arithmetic operations.
Every computer supports some form of input, processing, and output. This is less obvious on a primitive device such as the abacus where input, output and processing are simply the act of moving the pebbles into new positions, seeing the changed positions, and counting. Regardless, this is what computing is all about, in a nutshell. We input information, the computer processes it according to its basic logic or the program currently running, and outputs the results.
Modern computers do this electronically, which enables them to perform a vastly greater number of calculations or computations in less time. Despite the fact that we currently use computers to process images, sound, text and other non-numerical forms of data, all of it depends on nothing more than basic numerical calculations. Graphics, sound and so on are merely abstractions of the numbers being crunched within the machine; in digital computers these are the ones and zeros, representing electrical on and off states, and endless combinations of those. In other words, every image, every sound, and every word has a corresponding binary code.
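As a small illustration of that last point, here is a Python sketch (using the standard ASCII encoding, just one of many possible codes) showing how a word maps to the binary patterns a digital computer actually stores:

```python
# Each character of text has a numeric code, and that number has a
# binary representation -- ones and zeros are all the machine stores.
word = "Hi"
codes = [format(byte, "08b") for byte in word.encode("ascii")]
print(codes)  # ['01001000', '01101001']
```

The same principle extends to images (numbers for pixel colors) and sound (numbers for waveform samples).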
While the abacus may technically have been the first computer, most people today associate the word "computer" with electronic computers, which were invented in the last century and have evolved into the modern computers we know today.
First Generation Computers (1940s – 1950s)
The first electronic computers used vacuum tubes, and they were huge and complex. The first general-purpose electronic computer was the ENIAC (Electronic Numerical Integrator And Computer). It was digital, although it didn't operate with binary code, and was reprogrammable to solve a complete range of computing problems. It was programmed using plugboards and switches, supported input from an IBM card reader, and output to an IBM card punch. It took up 167 square meters, weighed 27 tons, and consumed 150 kilowatts of power. It used thousands of vacuum tubes, crystal diodes, relays, resistors, and capacitors.
The first non-general-purpose electronic computer was the ABC (Atanasoff–Berry Computer); other computers of this era included the German Z3, the ten British Colossus computers, LEO, the Harvard Mark I, and UNIVAC.
Second Generation Computers (1955 – 1960)
The second generation of computers came about thanks to the invention of the transistor, which then started replacing vacuum tubes in computer design. Transistor computers consumed far less power, produced far less heat, and were much smaller compared to the first generation, albeit still big by today's standards.
The first transistor computer was created at the University of Manchester in 1953. The most popular transistor computer was the IBM 1401. IBM also created the first disk drive, the IBM 350 RAMAC, in 1956.
Third Generation Computers (1960s)
The invention of the integrated circuit (IC), also known as the microchip, paved the way for computers as we know them today. Making circuits out of single pieces of silicon, which is a semiconductor, allowed them to be much smaller and more practical to produce. This also started the ongoing process of integrating ever larger numbers of transistors onto a single microchip. During the sixties, microchips started making their way into computers, but the process was gradual, and the second generation of computers held on.
Minicomputers appeared first. The earliest of these were still based on non-microchip transistors, while later versions were hybrids of transistors and microchips, such as IBM's System/360. They were much smaller and cheaper than the first and second generations of computers, also known as mainframes. Minicomputers can be seen as a bridge between mainframes and microcomputers, which came later as the proliferation of microchips in computers grew.
Fourth Generation Computers (1971 – present)
The first microchip-based central processing units consisted of multiple microchips for different CPU components. The drive for ever greater integration and miniaturization led towards single-chip CPUs, where all of the necessary CPU components were put onto a single microchip called a microprocessor. The first single-chip CPU, or microprocessor, was the Intel 4004.
The advent of the microprocessor spawned the evolution of microcomputers, the kind that would eventually become the personal computers we are familiar with today.
First Generation of Microcomputers (1971 – 1976)
The first microcomputers were a weird bunch. They often came in kits, and many were essentially just boxes with lights and switches, usable only by engineers and hobbyists who could understand binary code. Some, however, did come with a keyboard and/or a monitor, bearing somewhat more resemblance to modern computers.
It is arguable which of the early microcomputers could be called the first. The CTC Datapoint 2200 is one candidate, although it didn't actually contain a microprocessor (being based on a multi-chip CPU design instead) and wasn't meant to be a standalone computer, but merely a terminal for mainframes. The reason some might consider it the first microcomputer is that it could be used as a de facto standalone computer, it was small enough, and its multi-chip CPU architecture actually became the basis for the x86 architecture later used in the IBM PC and its descendants. Plus, it even came with a keyboard and a monitor, an exception in those days.
However, if we are looking for the first microcomputer that came with a proper microprocessor, was meant to be a standalone computer, and didn't come as a kit, then it would be the Micral N, which used the Intel 8008 microprocessor.
Popular early microcomputers that did come in kits include the MOS Technology KIM-1, the Altair 8800, and the Apple I. The Altair 8800 in particular spawned a large following among hobbyists, and is considered the spark that started the microcomputer revolution, as these hobbyists went on to found companies centered around personal computing, such as Microsoft and Apple.
Second Generation Microcomputers (1977 – present)
As microcomputers continued to evolve they became easier to operate, making them accessible to a larger audience. They typically came with a keyboard and a monitor, or could be easily connected to a TV, and they supported visual representation of text and numbers on the screen.
In other words, lights and switches were replaced by screens and keyboards, and the need to understand binary code diminished as they increasingly came with programs that could be used by issuing more easily understandable commands. Famous early examples of such computers include the Commodore PET, the Apple II, and, in the '80s, the IBM PC.
The nature of the underlying electronic components didn't change between these computers and the modern computers we know today, but what did change was the number of circuits that could be put onto a single microchip. Intel's co-founder Gordon Moore predicted the doubling of the number of transistors on a single chip every two years, which became known as "Moore's Law", and this trend has roughly held for over 30 years thanks to advancing manufacturing processes and microprocessor designs.
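As a back-of-the-envelope sketch of what that doubling implies (assuming an idealized, exactly-two-year doubling period, which real chips only roughly follow):

```python
# Moore's Law as a simple exponential-growth formula:
# count after `years` = starting count * 2^(years / doubling_period)
def transistors(start_count, years, doubling_period=2):
    """Projected transistor count after `years` of doubling."""
    return start_count * 2 ** (years / doubling_period)

# The Intel 4004 (1971) had roughly 2,300 transistors. After 30 years of
# doubling every two years, the projection is 2,300 * 2**15:
print(round(transistors(2300, 30)))  # 75366400
```

Roughly 75 million transistors is indeed in the same ballpark as CPUs of the early 2000s, which is why the "law" held up as well as it did.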
The consequence was a predictable exponential increase in the processing power that could be put into an ever smaller package, which had a direct effect on the possible form factors and applications of modern computers. That increase is what most of the paradigm-shifting innovations in computing that followed were about.
Graphical User Interface (GUI)
Possibly the most significant of those shifts was the invention of the graphical user interface, and of the mouse as a way of controlling it. Doug Engelbart and his team at the Stanford Research Institute developed the first mouse and a graphical user interface, demonstrated in 1968. They were years ahead of the personal computer revolution later sparked by the Altair 8800, so their idea didn't take hold at the time.
Instead it was picked up and improved upon by researchers at the Xerox PARC research center, which in 1973 developed the Xerox Alto, the first computer with a mouse-driven GUI. It never became a commercial product, however, as Xerox management wasn't ready to dive into the computer market and didn't see the potential of what they had early enough.
It took Steve Jobs negotiating a stock deal with Xerox, in exchange for a tour of their research center, to finally bring the user-friendly graphical user interface, as well as the mouse, to the masses. Jobs was shown what the Xerox PARC team had developed, and directed Apple to improve upon it. In 1984 Apple introduced the Macintosh, the first mass-market computer with a graphical user interface and a mouse.
Microsoft later caught on and produced Windows, and the historic competition between the two companies started, resulting in improvements to the graphical user interface to this day.
Meanwhile IBM was dominating the PC market with the IBM PC, and Microsoft was riding on its coattails as the company that produced and sold the operating system for the IBM PC, known as "DOS" or "Disk Operating System". The Macintosh, with its graphical user interface, was meant to dislodge IBM's dominance, but Microsoft made this more difficult with its PC-compatible Windows operating system and its own GUI.
Portable Computers
As it turned out, the idea of a laptop-like portable computer existed even before it was possible to create one. It was conceived at Xerox PARC by Alan Kay, who called it the Dynabook and intended it for children. The first portable computer actually built was the Xerox NoteTaker, but only 10 were produced.
The first commercialized portable computer was the Osborne 1 in 1981, with a small 5″ CRT monitor and a keyboard that sat inside the lid when closed. It ran CP/M (the OS that Microsoft bought and based DOS on). Later portable computers included the Bondwell 2, released in 1985, also running CP/M, which was among the first with a hinge-mounted LCD display. The Compaq Portable was the first IBM PC-compatible portable computer, and it ran MS-DOS, but it was less portable than the Bondwell 2. Other examples of early portable computers included the Epson HX-20, the GRiD Compass, the Dulmont Magnum, the Kyotronic 85, the Commodore SX-64, the IBM PC Convertible, and the Toshiba T1100, T1000, and T1200.
The first portable computers that resemble modern laptops in features were Apple's PowerBooks, which first introduced a built-in trackball, and later a trackpad and optional color LCD screens. IBM's ThinkPad was largely inspired by the PowerBook's design, and the evolution of the two led to the laptops and notebook computers we know today. The PowerBooks were eventually replaced by the modern MacBook Pros.
Of course, much of the evolution of portable computers was enabled by the evolution of microprocessors, LCD displays, battery technology and so on. This evolution ultimately allowed computers even smaller and more portable than laptops, such as PDAs, tablets, and smartphones.
Bonjour! Now you can say it with pride, since Narotama University opens a French language course this September 1st, 2013! It is free for 15 Narotama University students who pass the selection. If you would rather skip the selection, feel free to join the course anyway: IDR 1.5 million for 3.5 months, reaching the A1 level of French language mastery, including a course book and CD plus a one-year membership of Mediatek, the French library at Institut Francais Surabaya! The course is open to both Narotama University students and the public!
No more waiting; just register yourself with NLC and reap the benefits!
Contact Us :
1. Ani 081515819001
2. Qausya 0817368983
It might be hard to imagine, but Sanur was once at the very forefront of the Balinese tourist industry. It was, in the early days, the premier resort on an idyllic island paradise and a playground for the rich and famous.
Sanur has seen many superstars, including Mick Jagger, wandering its streets and frequenting its bars. Monochrome photographs of him, along with those of hundreds of other stars, adorn hotel and restaurant walls throughout the town and serve as a reminder of golden days lost in the passage of time.
Today, Bali still attracts many modern-day glitterati, but they tend toward the seclusion of lavish villas or the opulence of Nusa Dua hotels and, over time, Sanur has drifted into quiet and relative obscurity.
For those who live and work there, many consider this a desirable feature when viewed alongside the endless traffic of Ubud or life in the chaos of Kuta.
This can also be said of the tourists who sustain Sanur. Many are return visitors attracted by the calm, the safe streets and, of course, its reasonable prices. Sanur has, over recent years, inadvertently perhaps, positioned itself in the safe middle ground and while it might not have the visitor numbers some other resorts enjoy, or a single major tourist draw, judging by the longevity of many of its restaurants and hotels it clearly remains a viable financial commitment.
Sanur, unlike some places, is also quite distinctive in that, despite its tourism, it retains a very Balinese look and feel. In hotels, spas and restaurants the staff are predominantly Balinese and often local. In turn, the lack of external influence means that daily Balinese life is on constant and open display.
The beach, for example, is a hub of ceremonial activity, the streets full of Balinese architecture and at the right times overflowing with religious symbolism, and almost every street has a warung (street stall) dedicated to the mass production of offerings. In many other resort areas this is simply not the case and the genuine Balinese influence has been displaced or severely watered down.
Although much of the Sanur beachfront is in good condition, there are several excellent regeneration projects underway. The most notable is the work at Mertasari Beach, where the once ramshackle huts used by sellers and warung have been replaced by sturdy, well-built, permanent structures. Part of the scheme also sees locals taking responsibility for the cleanliness of the area and the sands, and the overall impact has been a marked improvement in order and cleanliness; it is most welcome.
However, Sanur is also seeing quite a lot of large-scale development, and several substantial hotel and villa complexes have recently been completed or are under construction. This is, of course, despite Governor Pastika's moratorium on such developments.
The real issue with this style of regeneration is not the buildings themselves, or the employment they bring, although how some meet the regulations should raise questions, but the lack of existing infrastructure they are built around. The likelihood is there will be a detrimental impact on the surrounding dwellings and businesses as saturation of the existing roads, drains, water and electricity occurs.
The roads are simply unable to cope with a significant increase in traffic, and here Sanur risks losing one of its key benefits. More hotels also require more water; most, if not all, will use well water, placing further stress on an already strained supply.
So, in summary, while regeneration and improvements are generally welcome, many of the current crop of projects may actually have a negative impact on the local economy, the environment, pricing and the general quality of life that is today Sanur's biggest asset.
White beaches and safe seas: Built in the 1990s with sponsorship from the Japanese government to stop beach erosion, the breakwaters with open pavilions were added along with thousands of tons of sand to regenerate the beaches.
Building without infrastructure: If the tourists do come in large numbers to fill the new hotels then much of the attraction of Sanur as a sleepy alternative runs the risk of being lost. Land prices, already high, will increase and local residents may well be forced out, just as in other parts of the island.