B. DVFS-enabled power optimization
DVFS-enabled power optimization helps reduce the power consumption budget by
migrating VMs. However, the power consumed within a server during VM migration
exceeds the limited support offered by the CPU architecture for DVFS. The
proposed approach considers the VM CAP value to decrease power consumption:
the scheme reduces the processor clock rate to keep power consumption within a
certain limit. DVFS technology exploits the relation among voltage, frequency,
and processor speed to adjust the CPU clock rate [3]. A power-capping-based VM
migration scheme that prioritizes VM migration was also discussed. PMapper is
a power-aware application placement framework that considers power usage and
migration cost while deciding on application placement within a DC. Moreover,
during VM migration, the power manager adaptively applies DVFS to balance power
efficiency and SLA guarantee. The PMapper architecture is based on three
modules, namely performance manager, power manager, and monitoring engine. For
optimal VM placement while considering power efficiency and application SLA,
PMapper uses bin packing heuristics to map VMs on a suitable server.
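The bin-packing step can be illustrated with a minimal sketch. The first-fit-decreasing heuristic below, the single CPU dimension, and all names are illustrative assumptions, not PMapper's actual interface; PMapper additionally weighs migration cost and SLA constraints, which this sketch omits.

```python
# Hypothetical sketch of bin-packing VM placement: first-fit-decreasing
# packing of VMs onto servers by CPU demand (in integral CPU shares).
# Fewer opened servers means more servers can be shut down.

def place_vms(vm_demands, server_capacity):
    """Map each VM index to a server index, opening a new server only
    when no existing one has room (first-fit decreasing)."""
    # Sort VMs by demand, largest first, so big VMs anchor the packing.
    order = sorted(range(len(vm_demands)), key=lambda i: -vm_demands[i])
    servers = []      # residual capacity per opened server
    placement = {}    # vm index -> server index
    for i in order:
        demand = vm_demands[i]
        for s, free in enumerate(servers):
            if free >= demand:            # first server with room
                servers[s] -= demand
                placement[i] = s
                break
        else:                             # no fit: power on a new server
            servers.append(server_capacity - demand)
            placement[i] = len(servers) - 1
    return placement, len(servers)

placement, used = place_vms([60, 50, 40, 30, 20], server_capacity=100)
print(used)  # 2
```

Sorting in decreasing order matters: placing large VMs first leaves the small ones to fill residual gaps, which typically opens fewer servers than arrival-order placement.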
Furthermore, the monitoring engine module gathers server/VM resource usage and
power-state statistics and forwards them to the power and performance
managers. PMapper sorts the servers by resource usage and power consumption,
and chooses the most suitable server to host the workload based on resource
availability and power-consumption estimates. It also identifies underutilized
servers from resource-usage statistics and migrates their load to other
servers, so that the vacated servers can be shut down for power efficiency. It
allocates workload under a minimum-energy-consumption policy. A scheduling
algorithm was proposed to
utilize DVFS methods to limit the power consumption budget within a DC. The
proposed scheduler dynamically checks application processing demands and
optimizes energy consumption using DVFS. A hierarchical power-capping
controller, built on an adaptive DVFS-enabled power-efficiency controller,
integrates power efficiency with power capping. The control-system
architecture consists of an efficiency controller, a server capper, and a
group capper. The efficiency controller tracks the demands of individual
servers, the server capper throttles per-server power consumption according to
feedback, and the group capper throttles power consumption at the server-group
level. In addition to power-distribution unfairness, the proposed scheme
assumes that the server-group configuration and power-supply structure are
flat, although they are actually hierarchical.
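The server capper's use of the voltage-frequency relation can be sketched as follows. The sketch uses the standard textbook approximation for dynamic CPU power, P = C · V² · f; the P-state table and the capacitance constant are made-up illustrative values, not figures from the discussed controller.

```python
# Illustrative server-level capper: pick the highest DVFS P-state whose
# estimated dynamic power fits under the cap. P-states and C_EFF are
# invented values for illustration only.

P_STATES = [(2.4, 1.20), (2.0, 1.10), (1.6, 1.00), (1.2, 0.90)]  # (GHz, V)
C_EFF = 20.0  # effective switched capacitance (illustrative units)

def dynamic_power(freq_ghz, volt):
    """Textbook dynamic-power model: P = C * V^2 * f."""
    return C_EFF * volt ** 2 * freq_ghz

def cap_server(power_cap_w):
    """Return the fastest (frequency, voltage) pair under the cap."""
    for freq, volt in P_STATES:           # states ordered fastest first
        if dynamic_power(freq, volt) <= power_cap_w:
            return freq, volt
    return P_STATES[-1]                   # floor: slowest state

freq, volt = cap_server(power_cap_w=50.0)
print(freq, volt)  # 2.0 1.1
```

Because power scales with V²·f and voltage can be lowered together with frequency, each step down the P-state table yields a super-linear power reduction for a linear loss of clock speed, which is why DVFS is effective for capping.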
C. Storage optimization
The model consists of two components, a target server and a proxy server,
connected to the source and destination servers through a network block device
connection. Whenever the destination storage is completely synchronized with
the source, the connection is torn down to release source-server resources. A
prototype implementation of I/O-blocked live storage migration rapidly
relocates disk blocks over WAN links with minimal impact on I/O performance.
The on-demand method fetches blocks from the source when they are not
available at the destination server, since storage cannot be shared between
sender and target servers at distant locations over the Internet. The
experiments revealed that I/O performance improved significantly compared to
conventional remote storage migration methods in terms of total migration time
and cache hit ratio.
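The on-demand read path can be sketched in a few lines. The class and the `fetch_from_source` callable are hypothetical stand-ins for the network-block-device transfer, not the prototype's actual interface.

```python
# Sketch of on-demand block fetching at the destination: reads are served
# locally when the block has already arrived (via background copy), and
# pulled from the source otherwise.

class DestinationDisk:
    def __init__(self, fetch_from_source):
        self.local = {}                   # block id -> data copied so far
        self.fetch = fetch_from_source    # callable: block id -> data
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.local:        # already synchronized: local hit
            self.hits += 1
        else:                             # not yet copied: pull on demand
            self.misses += 1
            self.local[block_id] = self.fetch(block_id)
        return self.local[block_id]

source = {b: f"data-{b}" for b in range(8)}
disk = DestinationDisk(lambda b: source[b])
disk.local.update({b: source[b] for b in (0, 1)})  # background copy so far
for b in (0, 1, 5, 5):
    disk.read(b)
print(disk.hits, disk.misses)  # 3 1
```

Note that an on-demand fetch also populates local storage, so repeated reads of the same block cross the WAN only once; this is what drives the cache-hit-ratio improvement reported above.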
Therefore, to utilize bandwidth capacity efficiently, the background copy
method is improved with compression using the LZO algorithm, which reduces the
total data transferred for storage synchronization and the migration time.
Introducing compression enhances network performance in terms of bandwidth
utilization. In case of a connection failure during storage migration, the
hosted application's performance degrades significantly and the system may
crash. Limited WAN bandwidth likewise degrades the live storage migration
process. A bitmap-based storage migration scheme employs a simple hash
algorithm such as SHA-1 to create and transfer a list of storage blocks,
called the sent bitmap, to the destination server. However, in order to
migrate back VMs after server maintenance, an intelligent incremental migration
(IM) approach is proposed that only transfers blocks that are updated after
migration from the source to reduce migration time and total migration data.
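The combination of the SHA-1 sent bitmap and the incremental pass can be sketched as follows. The block size and helper names are illustrative assumptions; only the SHA-1 digest comparison reflects the schemes described above.

```python
# Sketch of incremental migration (IM): a SHA-1 digest list (the "sent
# bitmap") records what the destination already holds, and the return
# migration transfers only blocks whose content changed in the meantime.

import hashlib

def digest(block):
    return hashlib.sha1(block).digest()

def incremental_blocks(source_blocks, sent_bitmap):
    """Return indices of blocks to transfer: those absent from the
    destination or whose SHA-1 no longer matches (i.e. dirtied)."""
    to_send = []
    for i, block in enumerate(source_blocks):
        if sent_bitmap.get(i) != digest(block):
            to_send.append(i)
    return to_send

blocks = [b"a" * 512, b"b" * 512, b"c" * 512]
bitmap = {i: digest(b) for i, b in enumerate(blocks)}  # after 1st migration
blocks[1] = b"B" * 512                                 # updated since then
print(incremental_blocks(blocks, bitmap))  # [1]
```

Transferring digests instead of re-sending every block means the migration cost after maintenance scales with the amount of dirtied data rather than with the full disk size.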
Synchronous replication is costly, as it affects running applications, the
network, and system resources. Therefore, a cooperative, context-aware
migration approach was proposed that enables the migration management system
to arrange DC migration across server platforms.
In this paper, the notions of cloud computing, VM migration, storage
migration, server consolidation, and dynamic voltage and frequency scaling
(DVFS) based power optimization are discussed. The large size of VM memory,
the unpredictable nature of workloads, limited bandwidth capacity, restricted
resource sharing, the inability to accurately predict application demands, and
aggressive migration decisions call for dynamic, lightweight, adaptive, and
optimal VM migration designs in order to improve application performance.
Furthermore, the inclusion of heterogeneous, dedicated, and fast communication
links for transferring storage and VM memory can augment application
performance by reducing total migration time and application service downtime.
Several server consolidation frameworks colocate VMs on fewer servers to
improve resource utilization and power efficiency.
A lightweight VM migration design can reduce overall development effort,
augment application performance, and speed up processing in the cloud data
center.