a. A virtual machine (VM) is a layer of software that imitates the hardware of an ordinary computer system. In this regard, the virtual-machine monitor is an effective technique for adding features below the ordinary operating system and application software. For example, one form of virtual-machine monitor, generally referred to as a Type II virtual-machine monitor (VMM), is built on the abstractions a host operating system delivers. Type II VMMs are sophisticated and convenient, but their performance is currently slower than that achieved when running outside a virtual machine. In broad terms, a virtual machine is a distinct and autonomous software unit that comprises a complete copy of an OS and application software (King, Dunlap, & Chen, 2003).
Its ability to consolidate servers is the most compelling benefit of VMs. For instance, a non-virtualized application server typically reaches only about 5% to 10% utilization, whereas a physical server hosting several VMs can easily achieve utilization of between 50% and 80%. This results in lower costs for hardware acquisition, repairs, and energy and cooling. Secondly, VM tools support server management because they can allocate and regulate the computing resources used by each VM. For instance, an administrator can reserve a minimum amount of network bandwidth for a VM running a transactional application, whereas a physical file or print server is not capable of such a reservation.
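The consolidation arithmetic above can be sketched in a few lines. This is a minimal illustration with made-up figures in the ranges the text gives; the function name and numbers are not from any real capacity-planning tool.

```python
import math

def hosts_needed(n_workloads, per_workload_util, target_host_util):
    """How many physical hosts are needed to run n_workloads, given each
    workload's average utilization and the utilization level we are willing
    to drive each consolidated host to (both as fractions of one host)."""
    total_demand = n_workloads * per_workload_util
    return math.ceil(total_demand / target_host_util)

# 20 lightly loaded servers at ~8% utilization each, consolidated onto
# hosts driven to ~64% utilization, fit on 3 hosts instead of 20.
print(hosts_needed(20, 0.08, 0.64))  # -> 3
```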
b. Briefly, cloud computing denotes a menu of hosting services typically delivered over the Internet on a metered basis while leveraging infrastructure shared by a number of customers. It has several benefits. First, reduced infrastructure costs: it helps cut overhead such as management, IT personnel, data storage, real estate, bandwidth, and power, because the provider absorbs the costs of software and hardware upgrades and of replacing obsolete network and security devices. Secondly, it delivers capacity, scalability, and speed of deployment. Finally, cloud computing supports data security and backup.
Conversely, cloud computing has several risks, including loss of control over data transmitted to the cloud and over network accessibility outsourced to the cloud. For example, under a traditional IT setting, firms can evaluate and adjust their own systems to ensure compliance with applicable regulations and standards; in the cloud, that control is surrendered. Another risk is aggregation risk: the cloud creates a new aggregation exposure, which is the main reason most cloud providers are unwilling to offer more favorable contracts to cloud subscribers (Merrill, 2014).
c. The client/server model describes communication between computing processes: service consumers (clients) and service providers (servers). Its basic features include the following. Clients and servers are functional modules with well-defined interfaces, meaning they hide their internal details; accordingly, client and server functions can be implemented as software components, hardware modules, or a combination of both. Secondly, every client/server relationship is established between two functional components: the client initiates a service request, and the server reacts to it. Moreover, all information exchange between clients and servers is conducted only through messages; no information is exchanged through shared global variables. In other terms, the client/server model is a distributed application structure that divides tasks between the server and the client.
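The features above (the client initiates, the server reacts, and all exchange happens through messages) can be sketched with a local TCP socket pair; this is a toy illustration, not any particular production protocol.

```python
import socket
import threading

def server(listener):
    conn, _ = listener.accept()       # the server waits; it never initiates
    request = conn.recv(1024)         # information arrives only as a message
    conn.sendall(b"response to " + request)
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))       # bind to any free local port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=server, args=(listener,), daemon=True).start()

client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"service request")    # the client initiates the exchange
reply = client.recv(1024).decode()
print(reply)                          # -> response to service request
client.close()
listener.close()
```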
On the other hand, a peer-to-peer network lacks a central server, and each client on the network shares files equally with the other clients. This makes it ideal for small enterprises, unlike the client/server architecture, mainly because peer-to-peer networks are cheap to install, operate, and maintain. Secondly, having no dedicated server makes the network simpler and less demanding to administer, and it is user-friendly since no elaborate security protocols are needed.
RAID 5 system
I. For measuring the overall performance of a storage system, input/output operations per second (IOPS) remains the most commonly used metric. Several factors should be considered when calculating the IOPS capability of a single storage system. The performance of a disk, and hence its IOPS, is centered on three key factors:
- Rotational speed, commonly referred to as the spindle speed. It is measured in revolutions per minute (RPM); most enterprise storage disks spin at 7,200, 10,000, or 15,000 RPM. The higher the spindle speed, the higher the performance of the disk.
- Average latency. This is the time taken for the sector of the disk being accessed to rotate into position under a read/write head.
- Average seek time. This is the time (in milliseconds) it takes for the read/write head of a hard drive to position itself over the track being accessed.
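The three factors above combine into a common back-of-the-envelope IOPS estimate: average rotational latency is the time for half a revolution at the spindle speed, and IOPS is roughly the reciprocal of average latency plus average seek time. The seek times below are typical published figures, not measurements from the text.

```python
def estimated_iops(rpm, avg_seek_ms):
    """Rough single-disk IOPS estimate from spindle speed and seek time."""
    avg_latency_ms = (60_000 / rpm) / 2   # half a revolution, in milliseconds
    return 1000 / (avg_latency_ms + avg_seek_ms)

# Typical enterprise spindle speeds with representative average seek times:
for rpm, seek_ms in [(7_200, 8.5), (10_000, 4.5), (15_000, 3.5)]:
    print(f"{rpm:>6} RPM: ~{estimated_iops(rpm, seek_ms):.0f} IOPS")
```

As expected, the 15,000 RPM disk lands near the ~180 IOPS figure often quoted for high-end spindles, confirming that higher spindle speed (and lower seek time) means higher performance.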
II. Advantages of RAID 5
- Performance: In RAID 5, performance is increased because the server has more spindles in use when reading or writing than when data is retrieved from a single drive.
- Availability: Availability is increased because the RAID controller can re-create lost data from parity information (a checksum of the data previously written to the disks, which is written concurrently with the original data).
III. The effective capacity of the 5-drive RAID 5 system (with 1,000 GB drives) is:
- Capacity: 4,000 GB
- Speed gain: 4× read speed, no write speed gain
- Fault tolerance: 1-drive failure
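The figures above follow from the RAID 5 layout: one drive's worth of space holds parity, so effective capacity is (n − 1) × drive size, reads are striped across the data drives, and the array survives a single drive failure. A minimal sketch of that arithmetic, assuming five 1,000 GB drives:

```python
def raid5_summary(n_drives, drive_gb):
    """Effective capacity and speed/fault characteristics of a RAID 5 array."""
    assert n_drives >= 3, "RAID 5 needs at least three drives"
    return {
        "capacity_gb": (n_drives - 1) * drive_gb,  # parity costs one drive's space
        "read_speed_gain": n_drives - 1,           # data striped across n-1 drives
        "fault_tolerance": 1,                      # any single drive may fail
    }

print(raid5_summary(5, 1000))
# -> {'capacity_gb': 4000, 'read_speed_gain': 4, 'fault_tolerance': 1}
```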
b. How a journaling file system uses transaction logging to ensure data integrity during a system failure while writing a file to the hard disk.
Using NTFS as the example, it is important to note that NTFS views every I/O operation that changes a file on the NTFS volume as a transaction, and manages each one as an atomic unit. After initiation, either the transaction is completed or, if the disk fails, it is rolled back. To guarantee a transaction's atomicity, NTFS records the sub-operations of a transaction in a log file before writing them to the disk. Once a complete transaction is recorded in the log file, NTFS executes the transaction's sub-operations on the volume cache. After NTFS updates the cache, it commits the transaction by recording its completion in the log file. After this stage, NTFS can guarantee that the entire transaction will appear on the volume, even if the disk fails. During recovery, NTFS redoes each committed transaction recorded in the log file, locates the transactions that were not committed at the time of the system failure, and undoes each of their sub-operations previously recorded.
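The redo/undo mechanism described above can be modeled with a toy write-ahead log. This is a simplified sketch in the spirit of the NTFS description, not NTFS itself: each sub-operation is logged (with its old value) before it reaches the "disk", a commit record marks completion, and recovery redoes committed transactions and undoes uncommitted ones.

```python
log = []        # the journal: (txn_id, record) tuples
disk = {}       # the volume, modeled as a dict of block -> value

def write_transaction(txn_id, updates, crash_before_commit=False):
    for block, value in updates.items():
        # Log the sub-operation (including the old value) BEFORE applying it.
        log.append((txn_id, ("write", block, disk.get(block), value)))
        disk[block] = value               # sub-operation reaches the volume
    if not crash_before_commit:
        log.append((txn_id, ("commit",))) # commit record stamps completion

def recover():
    committed = {t for t, rec in log if rec[0] == "commit"}
    for txn_id, rec in log:
        if rec[0] == "write":
            _, block, old, new = rec
            if txn_id in committed:
                disk[block] = new         # redo committed sub-operations
            elif old is None:
                disk.pop(block, None)     # undo: the block did not exist before
            else:
                disk[block] = old         # undo: restore the previous value

write_transaction("t1", {"a": 1, "b": 2})
write_transaction("t2", {"a": 99}, crash_before_commit=True)  # simulated failure
recover()
print(disk)   # t1's writes survive; t2's partial write is rolled back
```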
c. Advantages and Disadvantages that Solid State Drives (SSD) have Over Hard Disk Drives (HDD)
- An SSD requires/consumes less power than a standard HDD; thus it is energy efficient and extends a laptop's battery life.
- An SSD has no moving parts; it uses flash memory for data storage, which delivers better performance and reliability than an HDD.
- The cost of a solid-state drive is higher than that of an HDD per unit of storage. Consequently, this may limit usable capacity, as most users cannot afford large SSDs.
a. Serial transmission requires that one bit follow another, so only one communication channel is needed, rather than n, to convey data between two cooperating devices. The main advantage of serial over parallel transmission is that, owing to its single communication channel, it reduces the cost of transmission by roughly a factor of n compared with parallel transmission. Serial transmission occurs in one of two ways: synchronous or asynchronous. Asynchronous transmission is advantageous in that it is cheap and effective, which makes it an attractive low-speed communication choice (Burd, 2011). Synchronous transmission, in turn, offers speed, because no extra bits or gaps need to be introduced at the sending end and removed at the receiving end.
b. The performance of interrupt processing has three components. The first is the amount of time between the processor receiving an interrupt request and taking action to begin executing the interrupt service routine; this delay is referred to as interrupt latency. The second is the interrupt processing time: the amount of time the processor spends actually saving the machine state of the interrupted task and diverting execution to the interrupt service routine. The quantity of machine state saved here is normally small, on the assumption that the interrupt service routine will save any additional state it needs. The last component of interrupt service performance is the state-saving overhead: the time consumed saving machine registers that are not saved automatically but must be saved for the interrupt service routine to do its job.
c. Caching is beneficial in several ways: latency is reduced for active data, resulting in higher application performance, and I/O operations to external storage are minimized because most of the I/O is diverted to the cache. This in turn leads to lower levels of SAN traffic and contention.
a. For a program to be executed, it must first be stored in main memory. After the program has been loaded into memory, execution begins when its start address is supplied to the CPU, which then sends instruction addresses to the memory unit. To manage all of these tasks, an operating system provides a unit commonly referred to as the memory manager. The memory manager is obliged to track and enforce the access rights associated with each user's request, from the start of the program's execution until its termination.
b. The Advantages of Virtual Memory Management Include:
- Paged memory allocation: tasks can be allocated in non-contiguous memory locations, so memory is used more efficiently.
- Demand paging: tasks are no longer restricted by the size of physical memory, and memory is utilized more efficiently than under conventional schemes.
- Segmented/demand-paged memory allocation: this provides a large virtual memory, with each segment loaded on demand (Sabido, 2014).
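The demand-paging point above can be sketched with a small simulation: pages are loaded only when first referenced (a page fault), so a task larger than physical memory still runs. FIFO replacement is used here purely for illustration; real systems use more sophisticated policies.

```python
from collections import deque

def run_with_demand_paging(reference_string, n_frames):
    """Count page faults for a page-reference string under FIFO replacement."""
    frames = deque()
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1                # page fault: load the page on demand
            if len(frames) == n_frames:
                frames.popleft()       # evict the oldest resident page
            frames.append(page)
    return faults

# A task touching 5 distinct pages still runs in only 3 physical frames:
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(run_with_demand_paging(refs, 3))   # -> 9 page faults
```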
c. The Implementation of an Integrated Time-Driven Scheduler
The implementation of the integrated time-driven scheduler (ITDS) for the Real-Time Mach scheduler must satisfy several requirements. The first is that the scheduler needs to support multiple scheduling policies, because no single standardized scheduling policy suits all applications; accordingly, the scheduler interface should support changing scheduling policies dynamically. The second requirement demands that the ITDS scheduler in Real-Time Mach support multiprocessors; the scheduler must handle both single processors and multiprocessors. The final requirement concerns how to implement complex scheduling policies (Nakajima & Tokuda, 2009). In summary, the costs of interest include process creation, process execution, page-fault time, and process termination time.
d. Advantages of Threads Switching Over Processes Switching
- Context switching: Threads are relatively cheap to create, destroy, and represent. For instance, they require only a small amount of storage: the PC, SP, and the general-purpose registers. They do not need any space for memory-management information, since that is shared with the process. With this small context, it is relatively fast to switch between threads, so a context switch using threads is considerably cheaper than one using processes.
- Sharing: Threads can share many resources that separate processes cannot, including the code section, the data section, and OS resources such as open files.
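The sharing point above can be demonstrated directly: threads in one process share the data section, so all workers see and update the same variable, with a lock to keep the shared update safe. A separate process would instead need explicit inter-process communication. This is a minimal sketch; the names are illustrative.

```python
import threading

counter = 0                    # shared "data section" of the process
lock = threading.Lock()

def worker(times):
    global counter
    for _ in range(times):
        with lock:             # threads share memory, so updates need a lock
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # -> 40000: all four threads updated the same variable
```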
- Burd, S. D. (2011). Systems architecture: Hardware and software in business information systems. Boston, MA: Course Technology, Cengage Learning.
- Nakajima, T., & Tokuda, H. (2009). Design and implementation of a real-time scheduler in Real-Time Mach. Pittsburgh, PA: Carnegie Mellon University.
- King, S., Dunlap, G., & Chen, P. (2003). Operating System Support for Virtual Machines. Proceedings of the 2003 USENIX Technical Conference (pp. 72-82). Michigan: University of Michigan.
- Merrill, T. (2014, April). Cloud computing: Is your company weighing both benefits and risks? Retrieved from ACE Group: http://www.acegroup.com/us-en/assets/privacy-network-security-cloud-computing-is-your-company-weighing-both-benefits-risks.pdf
- Sabido, M. (2014). Advantages and disadvantages of virtual memory management schemes. Scribd.