Java Programming

Introduction
A computer is a device capable of performing computations and making logical decisions at very high speeds (millions or even billions of times faster than human beings). Most of today's personal computers can perform a billion additions per second. A person operating a desk calculator could spend an entire lifetime performing calculations and still not complete as many calculations as a powerful personal computer can perform in one second. A supercomputer can perform hundreds of billions of additions per second, and trillion-instructions-per-second computers are already functioning in research laboratories around the world.

Computers process data under the control of sets of instructions called programs. These programs guide the computer through orderly sets of instructions to perform actions specified by people called computer programmers. A computer consists of various devices referred to as hardware (e.g., the keyboard, monitor, mouse, hard disks, memory, DVD, CD-ROM and processing units). The programs that run on a computer are referred to as software.

Early computers could perform only one task (job) at a time. This is referred to as single-user batch processing: the computer runs a single program at a time while processing data in groups, or batches. In such cases, users generally submitted their jobs to a computer center on decks of punched cards and often had to wait for hours or even days before the printouts were returned to their desks. Software systems called operating systems were developed to make using computers more convenient. The early operating systems eased and sped up the transition between jobs, and hence increased the amount of work, or throughput, that computers could process. As computers became more powerful, single-user batch processing became inefficient, because so much time was spent waiting for slow input/output devices to complete tasks.
This created the need for many jobs or tasks to share the resources of the computer to achieve better utilization, a concept that involves multiprocessing and multiprogramming. Multiprogramming involves running more than one program at the same time, whereas multiprocessing is the simultaneous operation of many jobs that are competing to share the computer's resources. With early multiprogramming operating systems, users still submitted jobs on decks of punched cards and waited hours or days for results. In the 1960s, several groups in industry and the universities pioneered time-sharing operating systems. Time-sharing is a special case of multiprocessing/multiprogramming in which users access the computer through terminals, typically devices with keyboards and screens. Dozens or even hundreds of users share the computer at once. The computer does not actually run them all simultaneously. Rather, it runs a small portion of one user's job, then moves on to service the next user, perhaps providing service to each user several times per second. Thus, the users' programs appear to be running simultaneously. An advantage of time-sharing is that user requests receive almost immediate responses, and a user is not kept waiting for long.

Programmers write instructions in various programming languages, some directly understandable by computers and others requiring intermediate translation steps. Hundreds of computer languages are in use today. These may be divided into three general types:

 * 1) Machine languages
 * 2) Assembly languages
 * 3) High-level languages

Any computer can directly understand only its own machine language. Machine language is the "natural language" of a computer and as such is defined by its hardware design. Machine languages generally consist of strings of numbers (1s and 0s) that instruct computers to perform their most elementary operations one at a time. Machine languages are machine/platform dependent (i.e., a particular machine language can be used on only one type of computer). Such languages are cumbersome for humans, as illustrated by the following section of an early machine-language program that adds allowance pay to base pay and stores the result in gross pay:

    +1400042774
    +1300593419
    +1200374027

Machine-language programming was simply too slow and tedious for most programmers. Instead of using the strings of numbers that computers could directly understand, programmers came up with languages that used English-like abbreviations to represent elementary operations. These abbreviations formed the basis of assembly languages. Translator programs called assemblers were developed to convert early assembly-language programs to machine language at computer speeds. The following section of an assembly-language program also adds allowance pay to base pay and stores the result in gross pay:

    load    basepay
    add     allowance
    store   grosspay

Although such code is clearer to humans, it is not understood by computers until translated to machine language (by an assembler). Computer usage increased rapidly with the advent of assembly languages, since they were simpler, but programmers still had to use many instructions to accomplish even the simplest tasks.
To speed the programming process, high-level languages were developed in which single statements could be written to accomplish substantial tasks. Translator programs called compilers convert high-level language programs into machine language. High-level languages allow programmers to write instructions that look almost like everyday English and contain commonly used mathematical notations. A payroll program written in a high-level language might contain a statement such as

    grossSalary = baseSalary + overTime

This makes high-level languages far preferable to machine and assembly languages from the programmer's standpoint. C, C++, Microsoft's .NET languages (e.g., Visual Basic .NET, Visual C++ .NET and C#) and Java are among the most widely used high-level programming languages. The process of compiling a high-level language program into machine language can take a considerable amount of computer time. Interpreter programs were developed to execute high-level language programs directly, although much more slowly. Interpreters are popular in program-development environments in which new features are being added and errors corrected. Once a program is fully developed, a compiled version can be produced to run most efficiently.
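The payroll statement above can be written as an actual Java program. The sketch below is illustrative: the class name PayrollExample, the method grossSalary and the sample figures are made up for this example and do not come from the text.

```java
// A minimal sketch of the high-level payroll statement from the text,
// expressed as a complete Java program. Names and figures are illustrative.
public class PayrollExample {
    // Computes gross salary exactly as the text's statement does:
    // grossSalary = baseSalary + overTime
    static double grossSalary(double baseSalary, double overTime) {
        return baseSalary + overTime;
    }

    public static void main(String[] args) {
        double gross = grossSalary(50000.0, 2500.0);
        System.out.println("Gross salary: " + gross);
    }
}
```

A compiler (here, javac) translates this readable source into bytecode, illustrating the text's point that one high-level statement replaces several machine- or assembly-language instructions.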

Computer components
A computer can be seen as comprising 5 basic units:

Input unit: This is the "acquiring" section of the computer. It obtains information (data and computer programs) from input devices and places this information at the disposal of the other units so that it can be manipulated. Most information is entered into computers through keyboards and mouse devices. Information also can be entered in many other ways, including by voice input, by scanning and via a network, such as the Internet.

Output unit: This is the "shipping" section of the computer. It takes information that the computer has processed and places it on various output devices to make the information available for use outside the computer. Most information output from computers today is displayed on screens, printed on paper or used to control other devices. Computers also can output their information to networks, such as the Internet.

Memory unit: This is the rapid-access, relatively low-capacity "warehouse" section of the computer. It retains information that has been entered through the input unit, so that it will be immediately available for processing when needed. The memory unit also retains processed information until it can be placed on output devices by the output unit. Information in the memory unit is typically lost when the computer's power is turned off. The memory unit is often called either memory or primary memory.

Central processing unit (CPU): This is the "administrative" section of the computer. It coordinates and supervises the operation of the other sections. The CPU tells the input unit when information should be read into the memory unit, tells the ALU when information from the memory unit should be used in calculations and tells the output unit when to send information from the memory unit to certain output devices. Many of today's computers have multiple CPUs and, hence, can perform many operations simultaneously; such computers are called multiprocessors. The processor comprises two units, namely: the ALU and the CU.

Arithmetic and logic unit (ALU): This is the "factory" section of the computer. It is responsible for performing calculations, such as addition, subtraction, multiplication and division. It contains the decision mechanisms that allow the computer to perform logical operations, such as comparing two items from the memory unit and determining whether they are equal.

Control unit (CU): This is the management part of the computer. The control unit manages, coordinates and controls the operations of the computer and the devices connected to it.

Secondary storage unit: This is the long-term, high-capacity "backup storage" section of the computer. Programs or data not actively being used by the other units are normally placed on secondary storage devices (your hard disk or flash disk) until they are again needed, hours, days, months or even years later. Information on secondary storage takes much longer to access than information in main memory, but the cost per unit of secondary storage is much less than that of primary memory. Examples of secondary storage devices include CDs and DVDs, which can hold up to hundreds of millions of characters and billions of characters, respectively.

Early computers could perform only one job or task at a time. This is referred to as single-user batch processing: in such systems, the computer runs a single program at a time while processing data in groups, or batches. In these early computer systems, users had to submit their jobs to a computer center on decks of punched cards and often had to wait for hours or even days before printouts were returned to their desks. Software systems called operating systems were developed to make using computers more convenient. These early operating systems smoothed and sped up the transition between jobs, and hence increased the amount of work, or throughput, that computers could process.

As technology advanced and computers became more powerful, it became apparent that single-user batch processing was inefficient, because so much time was spent waiting for slow input/output devices to complete their tasks. For this reason, computer experts looked for a way for many jobs or tasks to share the resources of the computer to achieve better utilization. This is referred to as multiprogramming and multiprocessing. Multiprocessing involves the simultaneous operation of many jobs that are competing to share the computer's resources.

During the 1960s, several groups in industry and the universities pioneered time-sharing operating systems. Time-sharing is a special case of multiprogramming in which users access the computer through terminals, typically devices with keyboards and screens. This allows dozens or even hundreds of users to share the same computer at once. The computer does not actually run all the jobs/programs simultaneously; rather, it runs a small portion of one user's job, then moves on to service the next user, perhaps providing service to each user several times per second. Thus, the users' programs appear to be running simultaneously. An advantage of time-sharing is that user requests receive almost immediate responses.

Apple Computer popularized personal computing in 1977. By this time computers had become so economical that people could buy them for their own personal or business use. In 1981, IBM, the world's largest computer vendor, introduced the IBM Personal Computer. This, to a large extent, legitimized personal computing in business, industry and government organizations.

These computers were "standalone" units; to share information between them, people transported disks back and forth (an arrangement often called "sneakernet"). Although early personal computers were not powerful enough to timeshare several users, these machines could be linked together in computer networks, sometimes over telephone lines and sometimes in local area networks (LANs) within an organization. This led to distributed computing, in which, instead of an organization's computing being performed only at some central computer installation, computing is distributed over networks to the sites where the organization's work is performed. Personal computers were, and still are, powerful enough to handle the computing requirements of individual users as well as the basic communications tasks of passing information between computers electronically.

Information is shared easily across computer networks where computers referred to as file servers offer a common data store that may be accessed and used by client computers distributed throughout the network, hence the term client/server computing. Java is one of the most widely used programming languages for writing software for computer networking and for distributed client/server applications. Modern operating systems, such as UNIX, Linux, Apple Mac OS X (pronounced "O-S ten") and Microsoft Windows, have networking capabilities.

The History of Java
Java evolved from C++, C++ from C, and C from BCPL and B. BCPL was developed in 1967 by Martin Richards as a language for writing operating-systems software and compilers. In 1970, Ken Thompson modeled many features in his language B after their counterparts in BCPL, using B to create early versions of the UNIX operating system at Bell Laboratories. The C language evolved from B, developed by Dennis Ritchie at Bell Laboratories and originally implemented in 1972. C initially became widely known as the development language of the UNIX operating system. Most of the code for general-purpose operating systems today (e.g., those found in laptops, desktops, workstations and small servers) is written in C or C++. C++, an extension (advancement) of C, was developed by Bjarne Stroustrup in the early 1980s at Bell Laboratories. C++ provides a number of features that refine the C language, but more important, it provides capabilities for object-oriented programming. C++ is a hybrid language: it is possible to program in a C-like style, an object-oriented style, or both.

Java
The evolution of microprocessors made possible the development of personal computers, which now number in the hundreds of millions worldwide. Personal computers have profoundly influenced people's lives and the way organizations conduct and manage their business. Recognizing the profound impact of microprocessors on electronic devices, Sun Microsystems funded an internal corporate research project code-named Green in 1991. This resulted in the development of a language based on C++ that its creator, James Gosling, called Oak (named after an oak tree outside his window at Sun). Later, it was discovered that there already was a computer language called Oak. The name Java was suggested over a cup of coffee when Sun people visited a local coffee shop, and it stuck. However, the Green project ran into difficulties: the marketplace for intelligent consumer-electronic devices was not expanding in the 1990s as quickly as Sun had anticipated, and the project was in danger of being canceled. By sheer good fortune, at around the same period, the World Wide Web exploded in popularity (in 1993). The Sun people saw the opportunity and the immediate potential of using Java to add dynamic content, such as interactivity and animations, to Web pages; this breathed new life into the Java project. Java was formally announced by Sun Microsystems at a major conference in May 1995. From its inception, Java caught the attention of the business community because of its phenomenal interest in the World Wide Web. Currently Java is used to develop large-scale enterprise applications, to enhance the functionality of Web servers (the computers that provide the content we see in our Web browsers), to provide applications for consumer devices (e.g., mobile phones, personal digital assistants and pagers) and for many other tasks. Java, through a technique called multithreading, enables programmers to write programs with parallel activities.
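The multithreading mentioned above can be sketched with Java's built-in Thread and Runnable types. The class and method names here (ThreadSketch, runTwoWorkers) are illustrative, not from the text: two threads run as parallel activities, each incrementing a shared counter.

```java
import java.util.concurrent.atomic.AtomicInteger;

// A minimal multithreading sketch: two threads execute concurrently,
// each incrementing a thread-safe shared counter once.
public class ThreadSketch {
    static int runTwoWorkers() {
        AtomicInteger counter = new AtomicInteger(0);
        Runnable task = counter::incrementAndGet;   // each worker adds 1
        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");
        t1.start();                                 // both threads now run in parallel
        t2.start();
        try {
            t1.join();                              // wait for both to finish
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter.get();
    }

    public static void main(String[] args) {
        System.out.println("Both workers done; counter = " + runTwoWorkers());
    }
}
```

AtomicInteger is used rather than a plain int field so that the two concurrent increments cannot interfere with each other, which is the kind of coordination issue multithreaded programs must handle.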

Resources

 * 1) http://java.sun.com/j2se/5.0/download.jsp.

Links

 * 1) Java Identifiers
 * 2) Java Variables and Constants
 * 3) Assignment Statements and Assignment Expressions
 * 4) Java Character and String Data Type and Operations
 * 5) Java Numeric Data Types and Operations
 * 6) Java Input / Output