PROPOSING A LOAD BALANCING ALGORITHM WITH AN INTEGRATIVE APPROACH TO REDUCE RESPONSE TIME AND SERVICE PROCESS TIME IN DATA CENTERS

Goal: Load balancing policies map workloads onto virtual machines, seeking to keep the workload on every virtual machine at an almost equal level. In this research, a hybrid load balancing algorithm is proposed with the aim of reducing response time and processing time. Design/Methodology/Approach: The proposed algorithm performs load balancing using a table containing the status indicators of the virtual machines and the list of tasks allocated to each virtual machine. Response time and processing time in the data centers are evaluated for four algorithms: ESCE, Throttled, Round Robin, and the proposed algorithm. Results: The overall response time and data processing time in the data center are shorter for the proposed algorithm than for the other algorithms, improving response time and data processing time in the data center. The results for the overall response time show that the response time of the proposed algorithm improves by 12.28% compared to the Round Robin algorithm, 9.1% compared to the Throttled algorithm, and 4.86% compared to the ESCE algorithm. Limitations of the investigation: Due to time and technical limitations, load balancing with further goals, such as lowering costs and increasing productivity, has not been pursued. Practical implications: Implementing a hybrid load balancing policy can improve response time and processing time. Load balancing distributes the traffic load properly between virtual machines and prevents bottlenecks, which is effective in increasing customer responsiveness. Finally, improved response time increases the satisfaction of cloud users and the productivity of computing resources. Originality/Value: This research can be effective in optimizing existing algorithms and takes a step towards further research in this area.


INTRODUCTION
The structuring and implementation of the Industry 4.0 context is currently undergoing an evolution process and presents companies with the trend of a new business model format. Essentially, the Industry 4.0 environment has a high degree of technological development and a collaborative structure, characterized mainly by communication between different agents (hardware, software, data, people), allowing the exchange, storage, and interpretation of data in an intelligent system (Cordeiro et al., 2019). Today, cloud computing has become common in IT and is one step in the evolution of the Internet. Cloud computing provides an enormous amount of storage and computing services to users through the Internet, and has emerged as a popular computing model for hosting large-scale computing systems and services. Recently, significant research on resource management techniques, focused on optimizing cloud resources among several users, has been carried out. Resource management techniques are designed to improve various parameters in the cloud (Dhanasekar et al., 2014).
The basic technology for cloud computing is "virtualization", which separates resources and services from the underlying physical layer to provide multiple dedicated resources in the form of virtual machines. The term cloud also refers to this basic concept (Barzegar et al., 2014). In the cloud environment, almost all resources are virtualized and shared among multiple users (Arianyan et al., 2015). Virtualization is, in fact, the implementation of computer software that runs different programs just like a real machine. Virtualization has a close relationship with the cloud, because an end user can use cloud services through virtualization (Padhy and Rao, 2011).
Load balancing is an essential operation in cloud environments. Because cloud computing is growing fast and many customers all over the world are demanding more services and better results, load balancing is an important and necessary area of research. Many algorithms have been developed for allocating customer requests to available remote nodes.
Effective load balancing ensures efficient resource productivity for customers according to demand (Panwar and Mallick, 2015).
In this paper, cloud computing is first introduced and load balancing is identified as one of the methods for resource management in cloud computing. Then, after examining some load balancing algorithms, a load balancing algorithm is presented that balances the workload on virtual machines using an integrative approach combining available load balancing algorithms.

CLOUD COMPUTING
According to the definition of the National Institute of Standards and Technology (NIST), cloud computing is a model for providing easy, on-demand network access to a set of changeable and configurable computing resources, such as networks, servers, storage spaces, applications, and services, that can be provisioned rapidly and with minimal management effort and interaction (Sahu et al., 2013).

NIST cloud reference architecture components
The architecture introduced by the NIST consists of five major components:
• Cloud Provider: The person, organization, or entity responsible for making a service available to cloud users. The provider has six components: security, privacy, cloud service management, the service layer, the physical resource layer, and the resource abstraction and control layer. Cloud service management includes business support, provisioning and configuration, and portability and interoperability. In the service layer, Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS) are the main and most commonly used service models in cloud computing. Table 1 shows consumer and cloud provider activities in these service models.
• Cloud Consumer: A person or organization that uses the services of cloud providers to establish a business relationship.
• Cloud Auditor: A component that can assess the behavior of cloud services, information system performance, efficiency, and security independently of the cloud implementation. In the NIST architecture, the audit of security, privacy impact, and efficiency belongs to this section. Regarding security, a cloud auditor can assess the security controls of the information system to determine how well the controls are implemented, how activities are accounted for, and whether they produce satisfactory results with respect to the system requirements.

• Cloud Broker: An entity that manages the use, performance, and delivery of cloud services, and negotiates relationships between cloud providers and cloud consumers. In the NIST reference architecture, the broker provides service intermediation, aggregation, and arbitrage.

• Cloud Carrier: The intermediary that provides connectivity and transport of cloud services between cloud providers and cloud consumers.

Implementation models in cloud computing
The implementation models in cloud computing include:
• Private Cloud: In this type of cloud, corporate employees can access company or colleague data.
• Community Cloud: When multiple companies share their resources in the cloud, the created cloud is called a community cloud. This type of cloud is used by organizations with similar interests and common security needs.
• Public Cloud: Anyone from anywhere in the world can access it. An example is Google Cloud, which is open to everyone after a specific service level agreement (SLA) between the provider and the user. The public cloud is made available on a shared basis.
• Hybrid Cloud: A combination of both public and private clouds.

LOAD BALANCING
Load balancing is the process of redistributing the entire load across the individual nodes of a collective system. Its objective is to use resources effectively, to improve the response time of a task, and to eliminate situations in which some nodes are heavily loaded while others carry a low load. Load balancing is a mechanism to increase service level agreement compliance and make better use of resources. The load considered can be the CPU load, the amount of memory used, the delay, or the network load. In fact, the goal of load balancing is to find a proper mapping of tasks onto the system processors, so that each processor carries almost the same amount of work and the overall run time reaches its lowest value (Kaur and Kaur, 2015). Load balancing is a technique that provides high resource availability and effective resource efficiency by allocating the total load to different cloud nodes. It solves the overload problem and focuses on maximizing operational power, optimizing resource use, and minimizing response time; it is the prerequisite for maximum cloud performance and effective use of resources (Panwar and Mallick, 2015). Through load balancing, one can balance the load by dynamically transferring part of the local task load from one machine to a remote node or a less-used machine. This maximizes user satisfaction, minimizes response time, increases resource utilization, reduces the number of task rejections (tasks that are given back), and raises system performance (Khetan et al., 2013). In recent years, many researchers have proposed various ideas for solving the resource management problem through load balancing. Most load balancing methods use a migration approach between servers: tasks are migrated between virtual machines to balance the load on the servers, as shown in Figure 1 (Mustafa et al., 2015).
This figure shows, in the top section, the status of the virtual machines before load balancing and, in the bottom section, their status after load balancing. As can be seen, before load balancing two machines are fully loaded and have reached the overload status, while more than half of the capacity of the other two virtual machines is empty. In the second section of the figure, after load balancing, all the virtual machines are in almost the same loading situation. In load balancing at the level of virtual machines, the task load is distributed over the virtual machines: a mapping of tasks to virtual machines is created, and load balancing at this level determines which task is allocated to which virtual machine.

Load Balancing Algorithms
In general, load balancing algorithms are designed with two main goals in mind: provisioning cloud resources and increasing their utilization. Scheduling algorithms for virtual machines require load balancing to allocate virtual machines effectively. In fact, load balancing algorithms decide which virtual machine will be allocated to each cloud user request. So far, a large number of load balancing algorithms have been proposed; the three popular algorithms considered in the proposed approach of this paper are described below:

Round Robin Algorithm
The Round Robin algorithm uses a simple technique to distribute all processes over all available processors, assigning the same task load to each processor. The algorithm starts from a randomly selected virtual machine, and the data center controller then assigns requests to the list of virtual machines in a circular way (Bhathiya, 2009).
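As a rough, self-contained sketch of this policy (class and method names are illustrative, not CloudAnalyst's actual API), the controller can simply cycle an index over the VM list, ignoring each machine's current load:

```java
// Minimal Round Robin sketch: requests are assigned to VM ids 0..vmCount-1
// in a circular order, regardless of how loaded each machine already is.
public class RoundRobinBalancer {
    private final int vmCount;
    private int next = 0; // index of the VM that receives the next request

    public RoundRobinBalancer(int vmCount) {
        this.vmCount = vmCount;
    }

    // Return the next VM id in the rotation and advance the cursor.
    public int nextVm() {
        int id = next;
        next = (next + 1) % vmCount;
        return id;
    }
}
```

The simplicity is also the weakness the paper exploits later: a long-running job and a short job count the same, so load can drift out of balance.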

Throttled Algorithm
The Throttled algorithm maintains an index table of the virtual machines and their states (available or busy) (Bhathiya, 2009). When a new request arrives, the load balancer scans this table from the top and returns the first available virtual machine to the data center controller; if no machine is available, the request waits in the queue until a machine is released. Transferring load from overloaded servers to lightly loaded servers so that it is distributed equally improves performance (Hu et al., 2010).
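A minimal sketch of this index-table behavior (names are illustrative, not CloudAnalyst's actual classes) returns the first available machine, or -1 so the controller can queue the request:

```java
// Minimal Throttled sketch: an index table of VM states (busy/available).
// allocateVm() scans from the top and claims the first available machine;
// -1 signals that every VM is busy and the request must wait in the queue.
public class ThrottledBalancer {
    private final boolean[] busy; // index table: true = busy, false = available

    public ThrottledBalancer(int vmCount) {
        this.busy = new boolean[vmCount];
    }

    public int allocateVm() {
        for (int id = 0; id < busy.length; id++) {
            if (!busy[id]) {
                busy[id] = true;
                return id;
            }
        }
        return -1; // all VMs busy: request waits
    }

    // Called when a VM finishes its task and becomes available again.
    public void release(int id) {
        busy[id] = false;
    }
}
```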

Equally Spread Current Execution Algorithm
The equally spread current execution (ESCE) algorithm goes through some steps, taking into account the priorities (Bhathiya, 2009). In this algorithm, the load balancer continuously monitors the task queue for new tasks and then assigns these tasks to free virtual machines from the resource pool. The load balancer also uses the list of tasks allocated to the virtual machines to help detect free machines and assign new tasks to them (Domanal and Reddy, 2013). The equally spread current execution algorithm is an (optimized) Active Monitoring Load Balancer algorithm. The load distribution process is shown in Figure 4.

In a study using the SimGrid simulator, several test-bed scenarios were considered and several QoS criteria were evaluated to demonstrate the utility of the proposed algorithm. Rezaei et al. (2011) presented a data center architecture for cloud computing that manages system resources to distribute the load between data center resources in a balanced way and to reduce power consumption. Failure to distribute the load in a balanced way can reduce the efficiency and increase the vulnerability of the data center. Virtualization is a technology used at such centers and makes the live migration of virtual machines possible. In this research, an algorithm is presented that distributes the available load in a balanced way between different sources according to the productivity of the servers or hosts inside the data center. The system was evaluated by simulation, reallocating virtual machines based on their productivity and using live migration. The results show that the proposed algorithm distributes the load and properly ensures the SLA (Service Level Agreement) (Rezaei et al., 2011).
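Returning to the ESCE policy described at the start of this subsection, its core idea of active monitoring can be sketched as a least-allocations rule (names are illustrative, not the simulator's actual API):

```java
// Minimal ESCE (active monitoring) sketch: the balancer tracks the number
// of tasks currently allocated to each VM and always assigns the next job
// to the machine with the fewest active allocations.
public class EsceBalancer {
    private final int[] allocations; // active task count per VM

    public EsceBalancer(int vmCount) {
        this.allocations = new int[vmCount];
    }

    // Pick the VM with the minimum number of current allocations
    // (ties broken by the lowest id) and record the assignment.
    public int allocateVm() {
        int best = 0;
        for (int id = 1; id < allocations.length; id++) {
            if (allocations[id] < allocations[best]) {
                best = id;
            }
        }
        allocations[best]++;
        return best;
    }

    // Called when a VM completes one of its tasks.
    public void taskFinished(int id) {
        allocations[id]--;
    }
}
```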
In 2013, Mousavian Qalashqaei and Shiri optimized load balancing on virtual machines at a rate of 20% by combining meta-heuristic methods. In their paper, a new method is proposed to find suitable solutions for mapping a set of requests to the available resources of the system, according to the conditions of cloud computing systems. The method combines the tabu search algorithm with the mutation strategy of an evolutionary algorithm (Mousavian Qalashqaei and Shiri, 2013). Barani et al. (2015) performed load balancing to reduce the load of virtual machines. They provided an algorithm based on the processing power and the task load of virtual machines in cloud computing, comparing its response time and makespan with those of a number of other load balancing algorithms; through simulation, they found that the algorithm has an appropriate response time and makespan compared to previous algorithms. The makespan is the time difference between the beginning and the end of a sequence of jobs or tasks in the system. This time is very important for measuring the usefulness of the system, and it is better to reduce this measure (Barani et al., 2015). Chanaghlou and Dolati (2016), in a study entitled "Providing a Hybrid Multi-Objective Scheduling and Load Balancing in Cloud Computing", presented two algorithms for improving load balancing and task scheduling. The researchers concluded that the balancing algorithm, called HMTL, is able to achieve the load balancing goals and minimize the overall runtime, while also following a policy of reducing the number of task migrations. The scheduling algorithm, entitled LDTS (Linear Decision Trees), assigns new tasks to system processing nodes by computing their current load.

RESEARCH BACKGROUND
The method proposed by Hu et al. in 2010 uses a genetic algorithm for load balancing between virtual machines, examining system changes and historical data in addition to the current system state. This method also calculates in advance the effects of deploying virtual machines on host machines. Through this method, load balancing is achieved and the dynamic migration of virtual machines is reduced.
In 2013, Soundarajan et al. proposed a load balancing algorithm to optimize the use of resources in the cloud environment. The algorithm is a dynamic resource management method whose goal is to efficiently distribute the load over accessible virtual machines that are neither at the upper nor the lower limit. The simulation results show that this algorithm improves resource use and reduces response time.
Razali et al. presented a classification of virtual machines according to their execution time for load balancing. Virtual machines migrated to two different classes of resources, high-power hosts and low-power hosts, based on MIPS (Million Instructions per Second). Virtual machine migration is based on CPU utilization in steady conditions. Using this method, the number of migrations is minimized and energy is saved in the idle state (Razali et al., 2014). Chen et al. (2017), in a study entitled "A novel load balancing architecture and algorithm for cloud services", proposed a dynamic balancing method to solve the overload problem in cloud balancing. In this method, both server processing and computer loading are considered, and finally two algorithms were examined to validate the proposed innovative approach.
In 2018, Couturier et al. investigated and introduced the best strategy for asynchronous iterative load balancing in cloud computing. The purpose of the research was to introduce a new "best effort" strategy that balances the load of a node across all its loaded neighbors, while ensuring that all nodes involved in the load balancing step receive an almost equal task load. Simulations have shown that the LDTS algorithm improves load distribution, and that the HMTL algorithm improves parameters such as instantaneous load balancing, total load balancing, task load distribution, and overall runtime (Chanaghlou and Dolati, 2016).
In 2017, Derakhshanian et al. investigated load balancing in the cloud computing environment, taking into account the dependence between tasks and using an adaptive genetic algorithm. Considering the interactions between tasks, the purpose of the study was to provide a method for optimal load balancing in the network, so that the total completion time and the idle time of the machines would be minimized. The experimental results showed that localizing the interactions has a significant effect on reducing the total completion time (Derakhshanian et al., 2017).
In 2018, Mishkar et al. optimized task scheduling and load balancing in the cloud environment using the ant colony algorithm. The purpose of the study was not merely to schedule tasks, but also to examine load balancing on the machines. To do this, scheduling with the ant colony optimization algorithm, which provides effective solutions to many dynamic problems, was used. The research stated the problem and the related scheduling tasks, proposed definitions related to task scheduling and the cloud environment, followed all the steps of the algorithm, and finally performed the load balancing (Mishkar et al., 2018).

RESEARCH METHODOLOGY
The proposed algorithm is a hybrid algorithm, combining techniques used in two other load balancing algorithms, Throttled and ESCE. From the Throttled algorithm, the proposed algorithm obtains the states of the virtual machines; the ESCE technique is used to monitor and assign tasks to the virtual machines. Active load balancing algorithms always monitor the job queue so that jobs can be assigned to free or idle machines, and they maintain a list of the tasks allocated to each virtual machine. This list can reveal overloaded or low-loaded conditions within a time slice. Based on this information, load is transferred from overloaded machines to low-loaded machines so that the virtual machines reach a balanced load level.
The proposed algorithm is designed to improve response time and processing time. To achieve this goal, the algorithm initially proposes the virtual machine with the least load, which reduces the search overhead of finding a machine that can handle longer jobs and improves response time. In a data center, tasks and requests are received from user bases. The data center controller finds, for each job, a virtual machine that can execute it. Figure 5 shows the conceptual model of the proposed algorithm. In the figure, the virtual machines (VMs) host the cloud resources, the Cloudlets are the jobs, and DCC stands for Data Center Controller.
In each data center, there are a number of physical servers (hosts), which contain virtual machines (VMs). Jobs (Cloudlets) are received by the data center controller for execution and processing, and are allocated to virtual machines. In fact, users send their jobs to the data centers, where the jobs are allocated to servers and to the virtual machines inside the servers. The proposed algorithm proceeds as follows:

• Step 1: The algorithm keeps a list of the VMs, their status (occupied/free), and the tasks that are currently allocated to them.

• Step 2: The data center controller receives requests from cloud clients.

• Step 3: The data center controller asks the algorithm about the available virtual machines.

• Step 4: The algorithm finds the next available virtual machine by checking the status of the virtual machines in the table. If a machine is idle, it goes to Step 5 and sends that machine's ID to the controller. If no machine is idle, it proceeds as follows:
• It checks whether the virtual machine's capacity is greater than zero and whether the machine's number of current allocations is lower than the maximum number of allocations defined for the VM list; among these candidates, it selects the VM with the minimum load.
• It returns the ID of the virtual machine that has the minimum load.
• Step 5: The data center controller places the job on the VM returned by the algorithm.

• Step 6: If even the virtual machine with the minimum task load is overloaded, no machine can accept the job.

• Step 7: The data center controller returns the job to the pool of waiting tasks.

• Step 8: The controller continues processing, restarting from Step 2.
In the proposed algorithm, information is first collected about the current state: the VMs, their status (occupied/free), and the tasks currently allocated to them. The data center controller goes through the task list to allocate each job to a VM that can execute it. Since the controller recognizes VMs by their IDs, it asks the algorithm to nominate a VM. The algorithm holds a list of the VMs and their status, which also includes the number of tasks being performed on each VM. If a VM's current allocation count is zero, the machine is idle, whereas if the count is higher than zero, the machine has not yet completed its previous jobs. If the allocation count is lower than the maximum number of allocations, the machine can still accept other jobs. Therefore, no machine is left idle, and machines with the lowest allocation count are considered as the first option for allocating tasks. The algorithm sends the ID of the selected virtual machine to the data center controller; the controller checks that the selected machine can do the job, places the job on the VM, and tells the algorithm to update its table by adding a task to the work being done by that machine. However, if the job cannot be done on this machine, then, since the machine was selected as the one with the minimum load, no other machine can do it either; thus, the data center controller returns the job to the pool to wait and takes another job from the user request list. This process continues until all jobs are done, as shown in Chart 1.
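The allocation logic of the steps above can be sketched as follows. This is a minimal illustration only: the class names and the maxAllocations limit are assumptions for the sketch, not the paper's actual implementation. It combines the Throttled-style status table (idle machines are taken immediately) with the ESCE-style allocation counts (otherwise the least-loaded machine below the limit is chosen), and returns -1 when the job must go back to the waiting pool:

```java
// Minimal sketch of the proposed hybrid policy: a table of current
// allocations per VM doubles as the occupied/free status (count == 0 means
// idle). An idle VM is returned at once; otherwise the least-loaded VM
// below the maximum allocation limit is chosen; -1 means every machine is
// at its limit and the job returns to the pool of waiting tasks.
public class HybridBalancer {
    private final int[] allocations; // current task count per VM
    private final int maxAllocations;

    public HybridBalancer(int vmCount, int maxAllocations) {
        this.allocations = new int[vmCount];
        this.maxAllocations = maxAllocations;
    }

    public int allocateVm() {
        int best = -1;
        for (int id = 0; id < allocations.length; id++) {
            if (allocations[id] == 0) { // idle machine: take it immediately
                best = id;
                break;
            }
            if (allocations[id] < maxAllocations
                    && (best == -1 || allocations[id] < allocations[best])) {
                best = id; // least-loaded candidate so far
            }
        }
        if (best != -1) {
            allocations[best]++;
        }
        return best; // -1: all VMs at the limit, job waits in the pool
    }

    // Called by the controller when a VM completes one of its tasks.
    public void taskFinished(int id) {
        allocations[id]--;
    }
}
```

The early break on an idle machine is what cuts the search overhead mentioned above: the full minimum scan only happens when no machine is free.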

Implementation
In the implementation, the CloudAnalyst simulator is used. This simulator is a CloudSim-based visual tool and has been used in most studies on load balancing in cloud computing. The CloudAnalyst simulator easily accommodates any load balancing policy at the virtual machine level. Its graphical user interface receives the settings interactively and, after executing the load balancing policy, presents the results in the form of charts and tables. The source code for the proposed hybrid algorithm was added to the CloudAnalyst source code in Java via the NetBeans IDE 8.0 software. The settings include the configuration of the user bases, the application deployment configuration, user grouping, the data center settings, and the physical hardware of each data center, shown in Tables 1 to 5.

RESULTS
The settings considered in the simulation model were applied to the algorithms. The purpose of this research is to reduce the response time and processing time in data centers; therefore, the evaluation criteria in the results are the response time and the processing time in the data centers. These criteria are evaluated for four algorithms: ESCE, Throttled, Round Robin, and the proposed algorithm of this research (MyHybrid). The placement of user bases and data centers is shown in Figure 6.

Results in the response time criterion
The chart shows the service response time in each data center in milliseconds for the four algorithms. As shown in the chart, the average response time of the proposed algorithm is lower for each data center.
The average of this time is shown in Chart 2.

Chart 2. Average response time in milliseconds for each data center

Chart 3 shows the average response time in milliseconds for each user base for the four algorithms. As shown in the chart, the average response time of the proposed algorithm is lower for each user base.
Chart 3. Average response time in milliseconds for each user base for the four algorithms
Figure 6. Placement of user bases and data centers
Table 7. Results for the overall data center processing time for all algorithms, in milliseconds (ms)

Results in the processing time criterion in data centers
The results obtained for the overall data center processing time for all algorithms are shown in Table 7.
The results of the response time and processing time evaluation in the data centers for the four algorithms, ESCE, Throttled, Round Robin, and the proposed algorithm of this research (MyHybrid), show that the overall response time and data processing time in the data center for the proposed algorithm are lower than those of the other compared algorithms.

CONCLUSION
This paper focuses on the task load balancing of the hosts and attempts to provide almost equal task loads for all hosts.
In this study, a hybrid load balancing algorithm was proposed, combining the two existing ESCE and Throttled algorithms. Awareness of the virtual machines' status and of the number of assignments are the two main characteristics of the proposed hybrid algorithm, each derived from one of the two algorithms. The proposed algorithm helps the data center controller to choose, among the machines that can do the job (the available machines), a machine that is either idle or has the smallest load. This reduces the processing and time overhead of searching for a virtual machine, especially for longer jobs, and improves processing time and response time.
In the implementation, after analyzing the CloudSim and CloudAnalyst simulators, the source code for the proposed hybrid algorithm was written in Java and added to the CloudAnalyst simulator source code via the NetBeans IDE 8.0 software. Through the settings on the CloudAnalyst home page, the proposed algorithm was evaluated against three other algorithms: Round Robin, ESCE, and Throttled.
The results of the simulations performed according to the simulation model for the four algorithms show that the proposed algorithm has better response time and processing time than the other three algorithms.
The results of the overall response time for all algorithms show that the response time of the proposed algorithm improves by 12.28% compared to the Round Robin algorithm, 9.1% compared to the Throttled algorithm, and 4.86% compared to the ESCE algorithm.