Computer Architecture: Assignment 2
Hafiza Neha Shahid
Degree: CE-38-A
Jan 15, 2018

Grid Computing:

Grid computing is the collection of computer resources from multiple locations to reach a common goal. The grid can be thought of as a distributed system with non-interactive workloads that involve a large number of files. Grid computing is distinguished from conventional high-performance computing systems such as cluster computing in that grid computers have each node set to perform a different task or application. Grid computers also tend to be more heterogeneous and geographically dispersed (and thus not physically coupled) than cluster computers. Although a single grid can be dedicated to a particular application, a grid is commonly used for a variety of purposes. Grids are often constructed with general-purpose grid middleware software libraries, and grid sizes can be quite large.

Grids are a form of distributed computing in which a "super virtual computer" is composed of many networked, loosely coupled computers acting together to perform large tasks. For certain applications, distributed or grid computing can be seen as a special type of parallel computing that relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, etc.) connected to a computer network (private or public) by a conventional network interface such as Ethernet. This is in contrast to the traditional notion of a supercomputer, which has many processors connected by a local high-speed computer bus.

A grid is connected by parallel nodes that form a computer cluster, typically running an operating system such as Linux or other free software. The cluster can vary in size from a small workstation to several networks. The technology is applied to a wide range of applications, such as mathematical, scientific or educational tasks, through several computing resources. It is often used in structural analysis, Web services such as ATM banking, back-office infrastructures, and scientific or marketing research.

The idea of grid computing was first established in the early 1990s by Carl Kesselman, Ian Foster and Steve Tuecke. They developed the Globus Toolkit standard, which included grids for data storage management, data processing and intensive computation management.

Grid computing is made up of applications used for computational problems that are connected in a parallel networking environment. It connects each PC and combines the information to form one computation-intensive application. Grids draw on a variety of resources based on diverse software and hardware structures, computer languages and frameworks, either in a network or by using open standards with specific guidelines, to achieve a common goal.

Grid operations are generally classified into two categories:

• Data Grid: A system that handles large distributed data sets used for data management and controlled user sharing. It creates virtual environments that support dispersed and organized research. The Southern California Earthquake Center is an example of a data grid; it uses a middleware system that creates a digital library, a dispersed file system and a continuing archive.

• CPU Scavenging Grid: A cycle-scavenging system that moves projects from one PC to another as needed. A familiar CPU scavenging grid is the search for extraterrestrial intelligence computation, which includes more than three million computers.
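This style of work distribution can be made concrete with a minimal Python sketch. It is an illustration, not real grid middleware: independent work units are farmed out to a pool of stand-in "nodes", and a unit whose node drops out is resubmitted elsewhere, in the spirit of cycle scavenging. The analyze() function, the simulated failure rate and the retry limit are all assumptions made for the example.

```python
# Minimal grid-style work distribution: independent, non-interactive
# work units farmed out to loosely coupled workers, with failed units
# resubmitted to another worker (cycle scavenging with redundancy).
import random
from concurrent.futures import ProcessPoolExecutor, as_completed

def analyze(unit: int) -> int:
    """Stand-in for a compute-heavy, non-interactive work unit."""
    if random.random() < 0.2:             # simulate a node dropping out
        raise RuntimeError("node lost")
    return sum(i * i for i in range(unit * 100_000))

def run_grid(units, max_retries=2):
    results = {}
    attempts = {u: 0 for u in units}
    pending = set(units)
    with ProcessPoolExecutor(max_workers=4) as pool:  # 4 stand-in "nodes"
        while pending:
            futures = {pool.submit(analyze, u): u for u in pending}
            pending.clear()
            for fut in as_completed(futures):
                u = futures[fut]
                try:
                    results[u] = fut.result()
                except RuntimeError:                  # node failed mid-task
                    attempts[u] += 1
                    if attempts[u] <= max_retries:
                        pending.add(u)                # resubmit elsewhere
    return results

if __name__ == "__main__":
    print(run_grid(range(1, 9)))
```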
Grid computing is standardized by the Global Grid Forum and applied by the Globus Alliance using the Globus Toolkit, the de facto standard for grid middleware, which includes various application components. Grid architecture applies Global Grid Forum-defined protocols that include the following:

• Grid security infrastructure
• Monitoring and discovery service
• Grid resource allocation and management protocol
• Global access to secondary storage and GridFTP

Unlike parallel computing, grid computing projects typically have no time dependency associated with them. They use computers that are part of the grid only when those computers are idle, and operators can perform tasks unrelated to the grid at any time. Security must be considered when using computer grids, as controls on member nodes are usually very loose. Redundancy should also be built in, as many computers may disconnect or fail during processing.

Grid computing enables the creation of a single IT infrastructure that can be shared by multiple business processes. Oracle software is specifically designed for grid computing, delivering a higher quality of service to those business processes at a much lower cost.

Clusters:

In its most basic form, a cluster is a system comprising two or more computers or systems (called nodes) which work together to execute applications or perform other tasks, so that the users have the impression that only a single system responds to them, creating the illusion of a single resource (virtual machine). This concept is called transparency of the system. The key features for the construction of these platforms are reliability, load balancing and performance.

Types of Clusters

High Availability (HA) and failover clusters: these models are built to provide uninterrupted availability of services and resources through redundancy implicit in the system. The general idea is that if a cluster node fails (failover), applications or services remain available on another node. These types are used to cluster mission-critical databases, mail, file and application servers.

Load balancing clusters: this model distributes incoming traffic or requests for resources across nodes that run the same programs. All nodes are responsible for handling requests; if a node fails, the requests are redistributed among the available nodes. This type of solution is usually used on farms of Web servers (web farms).

HA and load balancing combination: as its name says, this combines the features of both types of cluster, thereby increasing the availability and scalability of services and resources. This type of cluster configuration is widely used for web, email, news or FTP servers.

Distributed processing and parallel processing clusters: this cluster model improves the availability and performance of applications, particularly for large computational tasks. A large computational task can be divided into smaller tasks that are distributed across the stations (nodes), like a massively parallel supercomputer. It is common to associate this type with the Beowulf cluster project at NASA. These clusters are used for scientific computing or financial analysis, both typical of tasks requiring high processing power.
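That divide-and-combine model can be sketched in a few lines of Python. This is an illustration on one machine, not real cluster software: worker processes stand in for the nodes, a large sum of squares is split into chunks, and the partial results are combined at the end. The problem, the chunking scheme and the worker count are arbitrary choices for the example.

```python
# Minimal parallel decomposition: one large computation is split into
# chunks, each chunk is handled by a worker process (a stand-in for a
# cluster node), and the partial results are combined.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, nodes = 10_000_000, 4
    step = n // nodes
    chunks = [(k * step, (k + 1) * step) for k in range(nodes)]
    chunks[-1] = (chunks[-1][0], n)        # last chunk absorbs any remainder
    with Pool(nodes) as pool:              # each process plays one node
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```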
Reasons to Use a Cluster

Clusters, or combinations of clusters, are used when content is critical or when services have to be available and/or processed as quickly as possible. Internet Service Providers (ISPs) and e-commerce sites often require high availability and load balancing in a scalable manner. Parallel clusters are heavily used in the film industry for rendering high-quality graphics and animations; Titanic, for example, was rendered on such a platform in the Digital Domain laboratories. Beowulf clusters are used in science, engineering and finance to work on projects involving protein folding, fluid dynamics, neural networks, genetic analysis, statistics, economics and astrophysics, among others. Researchers, organizations and companies use clusters because they need to increase their scalability, resource management, availability or processing power to supercomputing levels at an affordable price.

High-Availability (HA) or Failover Clusters

Computers have a strong tendency to stop when you least expect it, especially at the moment you need them most. It is rare to find an administrator who has never received a phone call in the middle of the night with the sad news that the system is down and has to be fixed. High availability is linked directly to our growing dependence on computers, which now play a critical role, primarily in companies whose main business is precisely the offer of some computing service, such as e-business, news, web sites or databases.

A high availability cluster aims to maintain the availability of the services provided by a computer system by replicating servers and services through redundant hardware and software reconfiguration. Several computers act together as one, each one monitoring the others and taking over their services if any of them fail. The complexity of the system lies in the software, which must monitor the other machines on the network, know which services are running and where, and decide what to do in case of a failure. A loss in performance or processing power is usually acceptable; the main goal is not to stop. There are some exceptions, such as real-time and mission-critical systems.

Fault tolerance is achieved through hardware such as RAID systems, redundant power supplies and boards, and fully connected network systems that provide alternative paths if a link breaks.
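The monitor-and-take-over behaviour described above can be sketched as a toy simulation in Python, and it is only that, not production HA software: check_heartbeat() stands in for whatever probe a real cluster manager would run, the failure rate is simulated, and the node names are hypothetical.

```python
# Toy failover monitor: poll the primary node's heartbeat and promote
# the standby once several consecutive checks fail.
import random
import time

def check_heartbeat(node: str) -> bool:
    """Stand-in probe; a real cluster would ping the node or its service."""
    return random.random() > 0.3            # simulate occasional failures

def monitor(primary: str, standby: str, max_missed: int = 3) -> str:
    missed = 0
    while True:
        if check_heartbeat(primary):
            missed = 0                      # healthy again, reset the count
        else:
            missed += 1
            print(f"{primary}: missed heartbeat {missed}/{max_missed}")
            if missed >= max_missed:
                print(f"failover: promoting {standby}")
                return standby              # standby takes over the service
        time.sleep(0.1)                     # real systems poll every few seconds

if __name__ == "__main__":
    active = monitor("node-a", "node-b")
    print(f"service now running on {active}")
```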
Cluster Load Balancing

Load balancing among servers is part of a comprehensive response to the explosive growth in network and Internet use, providing increased network capacity and improved performance. Consistent load balancing is seen today as part of any complete Web hosting and e-commerce project. It is not only for providers, though; enterprises can adopt the same technology to serve internal business customers.

Cluster systems based on load balancing integrate their nodes so that all requests from clients are distributed evenly across them. The systems do not work together on a single process; instead, they redirect requests independently as they arrive, based on a scheduler and an algorithm. This type of cluster is especially used by e-commerce sites and Internet service providers that need to resolve differences in load from multiple input requests in real time. Additionally, for a cluster to be scalable, it must ensure that each server is fully utilized.

When we balance load between servers that have the same ability to respond to a client, problems arise if more than one server can respond to the same request and communication is impaired. So we place a balancing element between the servers and the users and configure it so that, to the customers, the multiple servers appear to be a single address. A classic example would be the Linux Virtual Server, or simply a DNS load balancer. The balancing element has an address, which clients contact, called the Virtual Server (VS), and it redirects traffic to a server in the server pool. This element may be software dedicated to all of this management, or a network device that combines hardware performance and software to handle packet forwarding and load balancing in a single device.
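The scheduler behind such a virtual server can be illustrated with a short Python sketch. This is a simplified model, not the Linux Virtual Server itself: requests arriving at one address are handed to backends in round-robin order, the simplest scheduling algorithm, and a failed node can be dropped so the remaining nodes absorb its share. The backend names are illustrative.

```python
# Simplified virtual server (VS): clients contact one address and each
# request is redirected, round-robin, to a backend in the server pool.
from itertools import cycle

class VirtualServer:
    def __init__(self, backends):
        self._pool = list(backends)
        self._next = cycle(self._pool)

    def route(self, request: str) -> str:
        backend = next(self._next)          # round-robin selection
        return f"{request} -> {backend}"

    def remove(self, backend: str) -> None:
        """Drop a failed node; the remaining nodes absorb its share."""
        self._pool.remove(backend)
        self._next = cycle(self._pool)

if __name__ == "__main__":
    vs = VirtualServer(["web1", "web2", "web3"])
    for i in range(4):
        print(vs.route(f"GET /page{i}"))
    vs.remove("web2")                       # simulate a node failure
    for i in range(4, 7):
        print(vs.route(f"GET /page{i}"))
```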
Cloud Computing:

In the simplest terms, cloud computing means storing and accessing data and programs over the Internet instead of your computer's hard drive. The cloud is just a metaphor for the Internet. It goes back to the days of flowcharts and presentations that would represent the gigantic server-farm infrastructure of the Internet as nothing but a puffy, white cumulonimbus cloud, accepting connections and doling out information as it floats.

What cloud computing is not about is your hard drive. When you store data on or run programs from the hard drive, that's called local storage and computing. Everything you need is physically close to you, which means accessing your data is fast and easy for that one computer, or for others on the local network. Working off your hard drive is how the computer industry functioned for decades; some would argue it's still superior to cloud computing.

The cloud is also not about having dedicated network attached storage (NAS) hardware or a server in residence. Storing data on a home or office network does not count as utilizing the cloud. (However, some NAS devices will let you remotely access things over the Internet, and there's at least one NAS named "My Cloud," just to keep things confusing.)

For it to be considered "cloud computing," you need to access your data or your programs over the Internet, or at the very least have that data synchronized with other information over the Web. In a big business, you may know all there is to know about what's on the other side of the connection; as an individual user, you may never have any idea what kind of massive data processing is happening on the other end. The end result is the same: with an online connection, cloud computing can be done anywhere, anytime.
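The local-versus-cloud distinction drawn above can be shown in a few lines of Python: the same kind of read happens once against the local disk and once over the Internet. The file name and URL here are placeholders for the example, not real endpoints.

```python
# Local storage vs. cloud access: the same bytes read from this
# machine's disk, and fetched over the Internet.
from pathlib import Path
from urllib.request import urlopen

def read_local(path: str) -> bytes:
    """Local storage and computing: the data lives on this machine."""
    return Path(path).read_bytes()

def read_cloud(url: str) -> bytes:
    """Cloud computing: the data is fetched over the Internet."""
    with urlopen(url) as response:
        return response.read()

if __name__ == "__main__":
    Path("notes.txt").write_text("hello")
    print(read_local("notes.txt"))          # fast, but tied to one machine
    # Uncomment with a real URL and network access:
    # print(read_cloud("https://example.com/notes.txt"))
```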