Cloud Computing and Load Balancing

Load Balancing Based on the Cloud Environment. A cloud computing environment can be either static or dynamic, depending on the load and on how the nodes are distributed.

Static Environment. In a static environment, the cloud provider installs homogeneous resources. The resources in the cloud are not flexible once the environment is made static, and user requirements are not subject to change at run time. Algorithms proposed to achieve load balancing in a static environment cannot adapt to run-time changes in load. A static environment is simpler to simulate but is not compatible with heterogeneous cloud environments. The Round Robin algorithm provides load balancing in a static environment: resources are provisioned to tasks on a first-come-first-served (FCFS) basis, i.e. the task that arrives first is allocated a resource first, and tasks are scheduled in a time-sharing manner.
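The FCFS round-robin allocation described above can be sketched in a few lines; the task and node names below are purely illustrative, not part of any particular cloud platform:

```python
from itertools import cycle

def round_robin_assign(tasks, nodes):
    """Assign tasks to nodes in arrival (FCFS) order, cycling through
    a fixed pool of homogeneous resources."""
    pool = cycle(nodes)
    return {task: next(pool) for task in tasks}

# Tasks that arrive first are allocated first; nodes repeat cyclically,
# so the fourth task wraps back to the first node.
assignment = round_robin_assign(["t1", "t2", "t3", "t4"], ["n1", "n2", "n3"])
```

Note that the cycle ignores actual node load, which is exactly why round-robin suits only static, homogeneous environments.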
The least loaded resource (the node with the fewest active connections) is allocated to the task. Eucalyptus uses a greedy (first-fit) strategy with round-robin for VM mapping. Radojevic proposed an improved algorithm over round-robin called CLBDM (Central Load Balancing Decision Model). It uses the idea of Round Robin, but it also measures the duration of the connection between client and server.
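A minimal sketch of the least-connections rule mentioned above, assuming a simple dictionary that maps each node to its current connection count (all names are hypothetical):

```python
def least_connections_assign(task, connections):
    """Pick the node with the fewest active connections and record
    the new connection on it."""
    node = min(connections, key=connections.get)
    connections[node] += 1
    return node

# "n2" has the fewest connections, so it receives the task.
conns = {"n1": 5, "n2": 2, "n3": 7}
chosen = least_connections_assign("job", conns)
```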

Dynamic Environment. In a dynamic environment, the cloud provider installs heterogeneous resources. The resources are flexible, and in this scenario the cloud cannot rely on prior knowledge alone; it must consider run-time statistics. The needs of the users are granted flexibility (i.e. they may change at run time). Algorithms proposed to achieve load balancing in a dynamic environment can quickly adapt to run-time changes in load. Dynamic environments are more challenging to simulate but are highly adaptable to cloud computing environments. Based on the WLC (Weighted Least Connection) algorithm, Ren proposed a load balancing technique for dynamic environments called ESWLC. It allocates a task to the resource with the least weight while taking node capabilities into account: a task is assigned to a node based on that node's load and capabilities. The LBMM (Load Balancing Min-Min) algorithm uses a three-level framework for resource allocation in a dynamic environment. Since the cloud is massively scalable and autonomous, dynamic scheduling is a more sensible choice than static scheduling.
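The weighted-least-connection idea that ESWLC builds on can be sketched as picking the node with the lowest connections-to-capacity ratio, so more capable nodes receive proportionally more work. The node names and weights below are made up for illustration; this is not Ren's published algorithm:

```python
def weighted_least_connection(nodes):
    """nodes maps node -> (active_connections, capacity_weight).
    Choose the node minimizing connections per unit of capacity."""
    return min(nodes, key=lambda n: nodes[n][0] / nodes[n][1])

# "large" has more connections (5 vs 3) but a much higher capacity
# weight, so its ratio 5/4.0 beats 3/1.0 and it gets the next task.
pool = {"small": (3, 1.0), "large": (5, 4.0)}
target = weighted_least_connection(pool)
```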

Load Balancing Based on the Spatial Distribution of Nodes. Nodes in the cloud are highly distributed, so the node that makes the provisioning decision also governs the class of algorithm to be used. There are three sorts of algorithms that specify which node is responsible for balancing load in a cloud computing environment.

Centralized Load Balancing. In a centralized load balancing technique, all allocation and scheduling decisions are made by a single node. This node is responsible for storing the knowledge base of the entire cloud network and may apply either a static or a dynamic approach to load balancing. This technique reduces the time required to analyze different cloud resources but creates significant overhead on the centralized node. The network is also no longer fault-tolerant in this scenario, since the failure intensity of the overloaded centralized node is high, and recovery is not easy in case of node failure.

Distributed Load Balancing. In a distributed load balancing technique, no single node is responsible for making resource provisioning or task scheduling decisions. There is no single domain responsible for monitoring the cloud network; instead, multiple domains monitor the network to make accurate load balancing decisions.
Every node in the network maintains a local knowledge base to ensure efficient distribution of tasks in a static environment and re-distribution in a dynamic environment. In a distributed scenario, the failure intensity of any single node is not significant; hence the system is fault-tolerant and balanced, and no single node is overloaded with making load balancing decisions. Honeybee Foraging is a nature-inspired solution for load balancing in distributed scenarios. In Honeybee Foraging, the way bees move in search of food forms the basis of distributed load balancing in cloud computing environments. It is a self-organizing algorithm and uses a queue data structure in its implementation. Biased sampling is another distributed load balancing technique, which uses virtual graphs as its knowledge base.
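A very rough toy sketch of the honeybee-foraging idea: queues "advertise" their demand the way foraging bees advertise food sources, and an idle worker is recruited to an advertised queue, otherwise it scouts at random. The threshold, queue names, and dispatch rule here are assumptions for illustration, not the published algorithm:

```python
import random

def honeybee_dispatch(queues, advert_threshold=0.5, rng=random):
    """queues maps virtual-server name -> queue occupancy in [0, 1].
    Recruit the idle worker to a busy (advertised) queue if one exists;
    otherwise scout a random queue."""
    busy = [q for q, load in queues.items() if load >= advert_threshold]
    if busy:
        return rng.choice(busy)      # recruited, like a bee following a dance
    return rng.choice(list(queues))  # no adverts: scout randomly

# Only "vs1" is above the advertising threshold, so the worker joins it.
target = honeybee_dispatch({"vs1": 0.9, "vs2": 0.1})
```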

Hierarchical Load Balancing. Hierarchical load balancing is a combination of the above load balancing environments. It is typically modelled using a tree data structure, wherein every node in the tree is balanced under the supervision of its parent node. The master (or manager) can use a lightweight agent process to gather statistics from the slave (child) nodes; the provisioning or scheduling decision is then made based on the information gathered by the parent node. Three-phase hierarchical scheduling has multiple phases of scheduling: a request monitor acts as the head of the network and is responsible for monitoring service managers, which in turn monitor service nodes. The first phase uses BTO (Best Task Order) scheduling, the second phase uses EOLB (Enhanced Opportunistic Load Balancing) scheduling, and the third phase uses EMM (Enhanced Min-Min) scheduling.
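The parent-supervises-children pattern can be sketched as a tree in which each manager gathers its children's load statistics and forwards a task into the least-loaded subtree. This is a simplified illustration of the hierarchical idea, not the three-phase BTO/EOLB/EMM scheme itself, and all node names are hypothetical:

```python
class Node:
    """A manager node that collects load from its children and routes
    each task toward the least-loaded subtree."""
    def __init__(self, name, children=None, load=0):
        self.name, self.children, self.load = name, children or [], load

    def total_load(self):
        # A parent's statistic is its own load plus its subtree's load.
        return self.load + sum(c.total_load() for c in self.children)

    def assign(self, task):
        if not self.children:          # leaf worker: execute the task here
            self.load += 1
            return self.name
        target = min(self.children, key=Node.total_load)
        return target.assign(task)     # delegate down the hierarchy

# svc1's subtree carries load 3, svc2's carries 4, so the task descends
# into svc1 and lands on its lighter worker w2.
root = Node("manager", [
    Node("svc1", [Node("w1", load=2), Node("w2", load=1)]),
    Node("svc2", [Node("w3", load=4)]),
])
placed = root.assign("job")
```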
