Posts

Showing posts from 2016

Malicious Software Removal Tool

Microsoft Windows Malicious Software Removal Tool is a freely distributed virus removal tool developed by Microsoft for the Microsoft Windows operating system. First released on January 13, 2005, it is an on-demand anti-virus tool ("on-demand" means it lacks real-time protection) that scans the computer for specific widespread malware and tries to eliminate the infection. It is automatically distributed to Microsoft Windows computers via the Windows Update service but can also be separately downloaded. The program is usually updated on the second Tuesday of every month (commonly called "Patch Tuesday") and distributed via Windows Update, at which point it runs once automatically in the background and reports if malicious software is found. Alternatively, users can manually download this tool from the Microsoft Download Center. It records its results in a log file located at %windir%\debug\mrt.log. To run it manually at other times, users can start "m
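As a rough illustration of checking the results mentioned above, the following Python sketch reads the tail of the mrt.log file at the path cited in the post. The log's text encoding is not documented here and may vary between Windows versions, so the sketch guesses from the byte-order mark and decodes defensively; treat it as a starting point, not a definitive procedure.

    import os

    # Path cited above; %windir% normally expands to C:\Windows
    log_path = os.path.expandvars(r"%windir%\debug\mrt.log")

    with open(log_path, "rb") as f:
        raw = f.read()

    # Guess UTF-16 from a BOM, otherwise fall back to UTF-8; replace undecodable bytes
    encoding = "utf-16" if raw[:2] in (b"\xff\xfe", b"\xfe\xff") else "utf-8"
    text = raw.decode(encoding, errors="replace")

    # Print the tail, which usually covers the most recent scan report
    print(text[-2000:])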

Security governance

The Software Engineering Institute at Carnegie Mellon University, in a publication titled "Governing for Enterprise Security (GES)", defines characteristics of effective security governance. These include: an enterprise-wide issue; leaders are accountable; viewed as a business requirement; risk-based; roles, responsibilities, and segregation of duties defined; addressed and enforced in policy; adequate resources committed; staff aware and trained; a development life cycle requirement; planned, managed, measurable, and measured; and reviewed and audited.

Information security culture

Employees' behavior has a significant impact on information security in organizations. Cultural concepts can help different segments of an organization attend to information security. "Exploring the Relationship between Organizational Culture and Information Security Culture" provides the following definition of information security culture: "ISC is the totality of patterns of behavior in an organization that contribute to the protection of information of all kinds." Information security culture needs to be improved continuously. In "Information Security Culture from Analysis to Change", the authors commented, "It's a never ending process, a cycle of evaluation and change or maintenance." To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation. Pre-evaluation: to identify employees' awareness of information security and to analyze c

Sources of standards

International Organization for Standardization (ISO) is a consortium of national standards institutes from 157 countries, coordinated through a secretariat in Geneva, Switzerland. ISO is the world's largest developer of standards. ISO 15443: "Information technology - Security techniques - A framework for IT security assurance", ISO/IEC 27002: "Information technology - Security techniques - Code of practice for information security management", ISO/IEC 20000: "Information technology - Service management", and ISO/IEC 27001: "Information technology - Security techniques - Information security management systems - Requirements" are of particular interest to information security professionals. The US National Institute of Standards and Technology (NIST) is a non-regulatory federal agency within the U.S. Department of Commerce. The NIST Computer Security Division develops standards, metrics, tests and validation programs as well as publishes

Cryptography

Information security uses cryptography to transform usable information into a form that renders it unusable by anyone other than an authorized user; this process is called encryption. Information that has been encrypted (rendered unusable) can be transformed back into its original usable form by an authorized user, who possesses the cryptographic key, through the process of decryption. Cryptography is used in information security to protect information from unauthorized or accidental disclosure while the information is in transit (either electronically or physically) and while information is in storage. Cryptography provides information security with other useful applications as well, including improved authentication methods, message digests, digital signatures, non-repudiation, and encrypted network communications. Older, less secure applications such as telnet and ftp are slowly being replaced with more secure applications such as ssh that use encrypted network communication
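As a minimal sketch of the encrypt/decrypt cycle described above, assuming the third-party Python "cryptography" package is installed (an illustrative choice, not one named in the post), the snippet below encrypts a message with a symmetric key and then decrypts it back to its original form; without the key, the ciphertext is unusable.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # the cryptographic key held by authorized users
    cipher = Fernet(key)

    ciphertext = cipher.encrypt(b"wire transfer details")  # encryption: usable -> unusable
    plaintext = cipher.decrypt(ciphertext)                  # decryption: requires the same key

    assert plaintext == b"wire transfer details"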

Risk management

The Certified Information Systems Auditor (CISA) Review Manual 2006 provides the following definition of risk management: "Risk management is the process of identifying vulnerabilities and threats to the information resources used by an organization in achieving business objectives, and deciding what countermeasures, if any, to take in reducing risk to an acceptable level, based on the value of the information resource to the organization." There are two things in this definition that may need some clarification. First, the process of risk management is an ongoing, iterative process. It must be repeated indefinitely. The business environment is constantly changing and new threats and vulnerabilities emerge every day. Second, the choice of countermeasures (controls) used to manage risks must strike a balance between productivity, cost, effectiveness of the countermeasure, and the value of the informational asset being protected. Risk analysis and risk evaluatio

Non-repudiation

In law, non-repudiation implies one's intention to fulfill their obligations to a contract. It also implies that one party of a transaction cannot deny having received a transaction nor can the other party deny having sent a transaction. It is important to note that while technology such as cryptographic systems can assist in non-repudiation efforts, the concept is at its core a legal concept transcending the realm of technology. It is not, for instance, sufficient to show that the message matches a digital signature signed with the sender's private key, and thus only the sender could have sent the message and nobody else could have altered it in transit. The alleged sender could in return demonstrate that the digital signature algorithm is vulnerable or flawed, or allege or prove that his signing key has been compromised. The fault for these violations may or may not lie with the sender himself, and such assertions may or may not relieve the sender of liability, but
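To make the digital-signature point above concrete, here is a hedged sketch using Ed25519 from the third-party Python "cryptography" package (my choice for illustration, not something the text prescribes): verification succeeds only if the message matches a signature made with the sender's private key, which is exactly the technical evidence whose legal weight the paragraph discusses.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()   # held (ideally) only by the sender
    public_key = private_key.public_key()        # shared with anyone who needs to verify

    message = b"I agree to the contract terms."
    signature = private_key.sign(message)

    try:
        # Raises InvalidSignature if the message or signature was altered
        public_key.verify(signature, message)
        print("signature matches the sender's key")
    except InvalidSignature:
        print("signature does not verify")

As the paragraph notes, a successful verification still does not settle liability by itself if the signing key may have been compromised.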

Integrity

In information security, data integrity means maintaining and assuring the accuracy and completeness of data over its entire life-cycle. This means that data cannot be modified in an unauthorized or undetected manner. This is not the same thing as referential integrity in databases, although it can be viewed as a special case of consistency as understood in the classic ACID model of transaction processing. Information security systems typically provide message integrity in addition to data confidentiality.
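A small sketch of detecting unauthorized modification, using Python's standard hashlib (my choice of tool, not one named in the text): any change to the data changes its digest, so comparing digests reveals tampering, though it does not identify who made the change.

    import hashlib

    original = b"account balance: 1000"
    recorded_digest = hashlib.sha256(original).hexdigest()  # stored when the data was known good

    received = b"account balance: 9000"                     # illustrative tampered copy
    if hashlib.sha256(received).hexdigest() != recorded_digest:
        print("integrity check failed: data was modified")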

Confidentiality, information security

In information security, confidentiality "is the property, that information is not made available or disclosed to unauthorized individuals, entities, or processes" (excerpt from ISO/IEC 27000).

Photo identification, photo ID

Photo identification or photo ID is an identity document that includes a photograph of the holder, usually only his or her face. The most commonly accepted forms of photo ID are those issued by government authorities, such as driver's licenses, identity cards and passports, but special-purpose photo IDs may also be produced, such as internal security or access control cards. Photo identification may be used for face-to-face authentication of the identity of a party who is personally unknown to the person in authority, or when that person does not have access to a file, a directory, a registry or an information service that contains or can render a photograph of the party based on their name and other personal information.

User account policy

A user account policy is a document which outlines the requirements for requesting and maintaining an account on computer systems or networks, typically within an organization. It is very important for large sites where users typically have accounts on many systems. Some sites have users read and sign an account policy as part of the account request process. Policy content: the policy should state who has the authority to approve account requests; who is allowed to use the resources (e.g., employees or students only); any citizenship/residency requirements; whether users are allowed to share accounts or to have multiple accounts on a single host; the users' rights and responsibilities; when the account should be disabled and archived; how long the account can remain inactive before it is disabled; and password construction and aging rules.

Security policy

Security policy is a definition of what it means to be secure for a system, organization or other entity. For an organization, it addresses the constraints on behavior of its members as well as constraints imposed on adversaries by mechanisms such as doors, locks, keys and walls. For systems, the security policy addresses constraints on functions and flow among them, constraints on access by external systems and adversaries including programs and access to data by people.
Significance: If it is important to be secure, then it is important to be sure all of the security policy is enforced by mechanisms that are strong enough. There are many organized methodologies and risk assessment strategies to assure completeness of security policies and assure that they are completely enforced. In complex systems, such as information systems, policies can be decomposed into sub-policies to facilitate the allocation of security mechanisms to enforce sub-policies. However, this practice

Network security policy

A network security policy, or NSP, is a generic document that outlines rules for computer network access, determines how policies are enforced and lays out some of the basic architecture of the company security/network security environment. The document itself is usually several pages long and written by a committee. A security policy goes far beyond the simple idea of "keep the bad guys out". It's a very complex document, meant to govern data access, web-browsing habits, use of passwords and encryption, email attachments and more. It specifies these rules for individuals or groups of individuals throughout the company. The security policy should keep malicious users out and also exert control over potentially risky users within your organization. The first step in creating a policy is to understand what information and services are available (and to which users), what the potential is for damage and whether any protection is already in place to prevent misuse.

Computer security, cybersecurity, IT security

Computer security, also known as cybersecurity or IT security, is the protection of information systems from theft or damage to the hardware, the software, and to the information on them, as well as from disruption or misdirection of the services they provide. It includes controlling physical access to the hardware, as well as protecting against harm that may come via network access, data and code injection, and due to malpractice by operators, whether intentional, accidental, or due to them being tricked into deviating from secure procedures. The field is of growing importance due to the increasing reliance on computer systems in most societies. Computer systems now include a very wide variety of "smart" devices, including smartphones, televisions and tiny devices as part of the Internet of Things – and networks include not only the Internet and private data networks, but also Bluetooth, Wi-Fi and other wireless networks.

Cyberspace Electronic Security Act, CESA

The Cyberspace Electronic Security Act of 1999 (CESA) is a bill proposed by the Clinton administration during the 106th United States Congress that would have enabled the government to harvest keys used in encryption. The bill would have given law enforcement the ability to gain access to encryption keys and cryptography methods. The initial version of the act would have allowed federal law enforcement agencies to secretly use monitoring, electronic capturing equipment and other technologies to access and obtain information. These provisions were later stricken from the bill, although federal law enforcement agencies still have a significant degree of latitude to conduct investigations relating to electronic information. The act generated discussion about what capabilities should be allowed to law enforcement in the detection of criminal activity. After vocal objections from civil liberties groups, the administration backed away from the controversial bill.

Internet security products

Antivirus: Antivirus software and Internet security programs can protect a programmable device from attack by detecting and eliminating viruses; antivirus software was mainly shareware in the early years of the Internet, but there are now several free security applications on the Internet to choose from for all platforms.
Password managers: A password manager is a software application that helps a user store and organize passwords. Password managers usually store passwords encrypted, requiring the user to create a master password; a single, ideally very strong password which grants the user access to their entire password database.
Security suites: So-called security suites were first offered for sale in 2003 (McAfee) and contain a suite of firewalls, anti-virus, anti-spyware and more. They may now offer theft protection, portable storage device safety check, private Internet browsing, cloud anti-spam, a file shredder or make security-related decisions (answering p
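As a hedged illustration of the master-password idea described under "Password managers" above, the sketch below derives an encryption key from a master password with PBKDF2 from Python's standard hashlib. Real password managers differ in their exact key-derivation functions and parameters; the iteration count and key length here are only plausible placeholders.

    import hashlib, os

    master_password = b"correct horse battery staple"  # the single strong password the user remembers
    salt = os.urandom(16)                              # stored with the vault; not secret

    # Derive a 32-byte key; 600_000 iterations is an illustrative work factor, not a recommendation from the post
    vault_key = hashlib.pbkdf2_hmac("sha256", master_password, salt, 600_000, dklen=32)

    # vault_key would then encrypt the password database; losing the master password means losing the vault
    print(vault_key.hex())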

Types of firewall

Packet filter: A packet filter is a first-generation firewall that processes network traffic on a packet-by-packet basis. Its main job is to filter traffic from a remote IP host, so a router is needed to connect the internal network to the Internet. The router is known as a screening router, which screens packets leaving and entering the network.
Stateful packet inspection: In a stateful firewall the circuit-level gateway is a proxy server that operates at the network level of an Open Systems Interconnection (OSI) model and statically defines what traffic will be allowed. Circuit proxies will forward network packets (formatted units of data) containing a given port number, if the port is permitted by the algorithm. The main advantage of a proxy server is its ability to provide Network Address Translation (NAT), which can hide the user's IP address from the Internet, effectively protecting all internal information from the Internet.
Application-level gate

Firewalls

A computer firewall controls access between networks. It generally consists of gateways and filters which vary from one firewall to another. Firewalls also screen network traffic and are able to block traffic that is dangerous. Firewalls act as the intermediate server between SMTP and Hypertext Transfer Protocol (HTTP) connections.
Role of firewalls in web security: Firewalls impose restrictions on incoming and outgoing network packets to and from private networks. Incoming or outgoing traffic must pass through the firewall; only authorized traffic is allowed to pass through it. Firewalls create checkpoints between an internal private network and the public Internet, also known as choke points (borrowed from the identical military term of a combat-limiting geographical feature). Firewalls can create choke points based on IP source and TCP port number. They can also serve as the platform for IPsec. Using tunnel mode capability, a firewall can be used to implement VPNs.
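The allow/deny decision at such a choke point, based on source IP and TCP port, can be sketched in a few lines of Python. The rule fields and the default-deny fallback below are my own simplifications for illustration, not a description of any particular firewall product, and the filter is stateless in the manner of a first-generation packet filter.

    # Each rule: (source prefix, destination port, action); first match wins, default is deny
    RULES = [
        ("10.0.0.", 80, "allow"),   # internal hosts may reach the web server
        ("any",     23, "deny"),    # block telnet from anywhere
    ]

    def filter_packet(src_ip: str, dst_port: int) -> str:
        for prefix, port, action in RULES:
            if (prefix == "any" or src_ip.startswith(prefix)) and dst_port == port:
                return action
        return "deny"   # stateless default-deny: no memory of earlier packets

    print(filter_packet("10.0.0.5", 80))     # allow
    print(filter_packet("203.0.113.9", 23))  # deny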

Message Authentication Code, encrypt, authenticity

A message authentication code (MAC) is a cryptographic technique that uses a secret key to compute a short authentication tag over a message. The receiver, using the same secret key as the sender, recomputes the tag and checks that it matches the one received. The message authentication code protects both a message's data integrity as well as its authenticity.
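A minimal sketch of the MAC flow just described, using HMAC-SHA-256 from Python's standard library (one common MAC construction, chosen here for illustration): the sender computes a tag with the shared secret key, and the receiver recomputes it over the received message and compares the two in constant time.

    import hmac, hashlib

    shared_key = b"secret key known to both sender and receiver"
    message = b"transfer 100 to account 42"

    # Sender: compute the tag and send it alongside the message
    tag = hmac.new(shared_key, message, hashlib.sha256).digest()

    # Receiver: recompute the tag over the received message and compare
    expected = hmac.new(shared_key, message, hashlib.sha256).digest()
    if hmac.compare_digest(tag, expected):
        print("message is authentic and unmodified")
    else:
        print("message was altered or came from someone without the key")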

Pretty Good Privacy

Pretty Good Privacy provides confidentiality by encrypting messages to be transmitted or data files to be stored using an encryption algorithm such as Triple DES or CAST-128. Email messages can be protected by using cryptography in various ways, such as the following: Signing an email message to ensure its integrity and confirm the identity of its sender. Encrypting the body of an email message to ensure its confidentiality. Encrypting the communications between mail servers to protect the confidentiality of both message body and message header. The first two methods, message signing and message body encryption, are often used together; however, encrypting the transmissions between mail servers is typically used only when two organizations want to protect emails regularly sent between each other. For example, the organizations could establish a virtual private network (VPN) to encrypt the communications between their mail servers over the Internet. Unlike methods that can onl

Security token

Some online sites offer customers the ability to use a six-digit code which randomly changes every 30–60 seconds on a security token. The security token has built-in mathematical computations and manipulates numbers based on the current time built into the device. This means that every thirty seconds only a certain array of numbers would be correct to validate access to the online account. The website that the user is logging into is made aware of that device's serial number and knows the computation and correct time built into the device, to verify that the number given is indeed one of the handful of six-digit numbers that works in that given 30–60 second cycle. After 30–60 seconds the device presents a new random six-digit number which can be used to log into the website.
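The time-based code generation described above is standardized as TOTP (RFC 6238); the sketch below is a simplified Python version. The 30-second interval, SHA-1 digest, and six digits are common defaults but only assumptions here, since the post does not name a specific token product.

    import hmac, hashlib, struct, time

    def totp(shared_secret: bytes, interval: int = 30, digits: int = 6) -> str:
        # Both the token and the website derive the same counter from the current time
        counter = int(time.time()) // interval
        msg = struct.pack(">Q", counter)
        digest = hmac.new(shared_secret, msg, hashlib.sha1).digest()
        # Dynamic truncation picks 4 bytes from the digest and reduces them to N digits
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    secret = b"per-device secret provisioned when the token is issued"  # placeholder value
    print(totp(secret))   # a six-digit code valid for roughly 30 seconds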

Application security

Application security encompasses measures taken throughout the code's life-cycle to prevent gaps in the security policy of an application or the underlying system (vulnerabilities) through flaws in the design, development, deployment, upgrade, or maintenance of the application. Applications only control the use of resources granted to them, and not which resources are granted to them. They, in turn, determine the use of these resources by users of the application through application security.

Phishing

Phishing is the attempt to acquire sensitive information such as usernames, passwords, and credit card details (and sometimes, indirectly, money), often for malicious reasons, by masquerading as a trustworthy entity in an electronic communication. The word is a neologism created as a homophone of fishing, due to the similarity of using bait in an attempt to catch a victim. Communications purporting to be from popular social web sites, auction sites, banks, online payment processors or IT administrators are commonly used to lure unsuspecting victims. Phishing emails may contain links to websites that are infected with malware. Phishing is typically carried out by email spoofing or instant messaging, and it often directs users to enter details at a fake website whose look and feel are almost identical to the legitimate one. Phishing is an example of social engineering techniques used to deceive users, and exploits the poor usability of current web security technologies. Attempts to

Denial-of-service attacks

A denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a computer resource unavailable to its intended users. Although the means to carry out, motives for, and targets of a DoS attack may vary, it generally consists of the concerted efforts to prevent an Internet site or service from functioning efficiently or at all, temporarily or indefinitely. According to businesses who participated in an international business security survey, 25% of respondents experienced a DoS attack in 2007 and 16.8% experienced one in 2010.

Malicious software, Malware

A computer user can be tricked or forced into downloading software onto a computer that is of malicious intent. Such software comes in many forms, such as viruses, Trojan horses, spyware, and worms. Malware, short for malicious software, is any software used to disrupt computer operation, gather sensitive information, or gain access to private computer systems. Malware is defined by its malicious intent, acting against the requirements of the computer user, and does not include software that causes unintentional harm due to some deficiency. The term badware is sometimes used, and applied to both true (malicious) malware and unintentionally harmful software. A botnet is a network of zombie computers that have been taken over by a robot or bot that performs large-scale malicious acts for the creator of the botnet. Computer viruses are programs that can replicate their structures or effects by infecting other files or structures on a computer. The common use

Internet security, encryption

Internet security is a branch of computer security specifically related to the Internet, often involving browser security but also network security on a more general level, as it applies to other applications or operating systems as a whole. Its objective is to establish rules and measures to use against attacks over the Internet. The Internet represents an insecure channel for exchanging information, leading to a high risk of intrusion or fraud, such as phishing. Different methods have been used to protect the transfer of data, including encryption and from-the-ground-up engineering.

VPN, routers

With the increasing use of VPNs, many have started deploying VPN connectivity on routers for additional security and encryption of data transmission by using various cryptographic techniques. Setting up VPN services on a router allows any connected device to use the VPN network while it is enabled. This also makes VPN services available to devices that do not have native VPN clients, such as smart TVs and gaming consoles. Many router manufacturers, such as Cisco, Linksys, Asus, and Netgear, supply their routers with built-in VPN clients.

VPNs in mobile environments

Mobile virtual private networks are used in settings where an endpoint of the VPN is not fixed to a single IP address, but instead roams across various networks such as data networks from cellular carriers or between multiple Wi-Fi access points. Mobile VPNs have been widely used in public safety, where they give law enforcement officers access to mission-critical applications, such as computer-assisted dispatch and criminal databases, while they travel between different subnets of a mobile network. They are also used in field service management and by healthcare organizations, among other industries. Increasingly, mobile VPNs are being adopted by mobile professionals who need reliable connections. They are used for roaming seamlessly across networks and in and out of wireless coverage areas without losing application sessions or dropping the secure VPN session. A conventional VPN cannot withstand such events because the network tunnel is disrupted, causing applications to d

Routing

Tunnelling protocols can operate in a point-to-point network topology that would theoretically not be considered as a VPN, because a VPN by definition is expected to support arbitrary and changing sets of network nodes. But since most router implementations support a software-defined tunnel interface, customer-provisioned VPNs often are simply defined tunnels running conventional routing protocols.
Provider-provisioned VPN building blocks: Depending on whether a provider-provisioned VPN (PPVPN) operates in layer 2 or layer 3, the building blocks described below may be L2 only, L3 only, or combine them both. Multi-protocol label switching (MPLS) functionality blurs the L2-L3 identity. RFC 4026 generalized the following terms to cover L2 and L3 VPNs, but they were introduced in RFC 2547. More information on the devices below can also be found in Lewis, Cisco Press.
Customer (C) devices: A device that is within a customer's network and not directly connecte

Virtual private network

A virtual private network (VPN) extends a private network across a public network, such as the Internet. It enables users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network, and thus benefit from the functionality, security and management policies of the private network. A VPN is created by establishing a virtual point-to-point connection through the use of dedicated connections, virtual tunnelling protocols, or traffic encryption. A VPN spanning the Internet is similar to a wide area network (WAN). From a user perspective, the extended network resources are accessed in the same way as resources available within the private network. Traditional VPNs are characterized by a point-to-point topology, and they do not tend to support or connect broadcast domains. Therefore, communication, software, and networking, which are based on OSI layer 2 and broadcast packets, such as NetBIOS used in W

Hardware virtualization techniques

This approach is described as full virtualization of the hardware, and can be implemented using a type 1 or type 2 hypervisor: a type 1 hypervisor runs directly on the hardware, and a type 2 hypervisor runs on another operating system, such as Linux or Windows. Each virtual machine can run any operating system supported by the underlying hardware. Users can thus run two or more different "guest" operating systems simultaneously, in separate "private" virtual computers. The pioneer system using this concept was IBM's CP-40, the first (1967) version of IBM's CP/CMS (1967–1972) and the precursor to IBM's LPAR VM family (1972–present). With the VM architecture, most users run a relatively simple interactive computing single-user operating system, CMS, as a "guest" on top of the VM control program (VM-CP). This approach kept the CMS design simple, as if it were running alone; the control program quietly provides multitasking and resource m

Process virtual machines

A process VM, sometimes called an application virtual machine, or Managed Runtime Environment (MRE), runs as a normal application inside a host OS and supports a single process. It is created when that process is started and destroyed when it exits. Its purpose is to provide a platform-independent programming environment that abstracts away details of the underlying hardware or operating system, and allows a program to execute in the same way on any platform. A process VM provides a high-level abstraction – that of a high-level programming language (compared to the low-level ISA abstraction of the system VM). Process VMs are implemented using an interpreter; performance comparable to compiled programming languages can be achieved by the use of just-in-time compilation. This type of VM has become popular with the Java programming language, which is implemented using the Java virtual machine. Other examples include the Parrot virtual machine, and the .NET Framework,

System virtual machines

System virtual machine advantages: Multiple OS environments can co-exist on the same primary hard drive, with a virtual partition that allows sharing of files generated in either the "host" operating system or "guest" virtual environment. Adjunct software installations, wireless connectivity, and remote replication, such as printing and faxing, can be generated in any of the guest or host operating systems. Regardless of the system, all files are stored on the hard drive of the host OS. Application provisioning, maintenance, high availability and disaster recovery are inherent in the virtual machine software selected. A system VM can also provide emulated hardware environments different from the host's instruction set architecture (ISA), through emulation or by using just-in-time compilation.
The main disadvantages of VMs are: A virtual machine is less efficient than an actual machine when it accesses the host hard drive indirectly. When multiple VMs are concurren

Virtual machine, VM

In computing, a virtual machine (VM) is an emulation of a particular computer system. Virtual machines operate based on the computer architecture and functions of a real or hypothetical computer, and their implementations may involve specialized hardware, software, or a combination of both. Various kinds of virtual machines exist, each with different functions. System virtual machines (also known as full virtualization VMs) provide a complete substitute for the targeted real machine and a level of functionality required for the execution of a complete operating system. A hypervisor uses native execution to share and manage hardware, allowing multiple different environments, isolated from each other, to be executed on the same physical machine. Modern hypervisors use hardware-assisted virtualization, which provides efficient and full virtualization by using virtualization-specific hardware capabilities, primarily from the host CPUs. Process virtual machin

Unix shell

A Unix shell is a command-line interpreter or shell that provides a traditional Unix-like command-line user interface. Users direct the operation of the computer by entering commands as text for a command-line interpreter to execute, or by creating text scripts of one or more such commands. Users typically interact with a Unix shell using a terminal emulator; however, direct operation via serial hardware connections or network sessions is common for server systems. All Unix shells provide filename wildcarding, piping, here documents, command substitution, variables and control structures for condition-testing and iteration.
Concept: The most generic sense of the term shell means any program that users employ to type commands. A shell hides the details of the underlying operating system and manages the technical details of the operating system kernel interface, which is the lowest-level, or "inner-most", component of most operating systems. In

Hypervisor

A hypervisor or virtual machine monitor (VMM) is a piece of computer software, firmware or hardware that creates and runs virtual machines. A computer on which a hypervisor is running one or more virtual machines is defined as a host machine. Each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. Multiple instances of a variety of operating systems may share the virtualized hardware resources.
Classification: In their 1974 article "Formal Requirements for Virtualizable Third Generation Architectures", Gerald J. Popek and Robert P. Goldberg classified two types of hypervisor:
Type-1, native or bare-metal hypervisors: These hypervisors run directly on the host's hardware to control the hardware and to manage guest operating systems. For this reason, they are sometimes called bare metal hyperv

Logical partition

A logical partition, commonly called an LPAR, is a subset of a computer's hardware resources, virtualized as a separate computer. In effect, a physical machine can be partitioned into multiple logical partitions, each hosting a separate operating system.
History: IBM developed the concept of hypervisors (virtual machines in CP-40 and CP-67) and in 1972 provided it for the S/370 as Virtual Machine Facility/370. IBM introduced the Start Interpretive Execution (SIE) instruction (designed specifically for the execution of virtual machines) as part of the 370-XA architecture on the 3081, as well as VM/XA versions of VM to exploit it. PR/SM is a type-1 hypervisor based on the CP component of VM/XA that runs directly on the machine level and allocates system resources across LPARs to share physical resources. It is a standard feature on IBM System z, System p, and System i machines. The terms PR/SM and LPAR are often used interchangeably, including in IBM documentation. Formally, L

Oracle VM Server for SPARC

Logical Domains (LDoms or LDOM) is the server virtualization and partitioning technology for SPARC V9 processors. It was first released by Sun Microsystems in April 2007. After the Oracle acquisition of Sun in January 2010, the product has been re-branded as Oracle VM Server for SPARC from version 2.0 onwards. Each domain is a full virtual machine with a reconfigurable subset of hardware resources. Domains can be securely live migrated between servers while running. Operating systems running inside Logical Domains can be started, stopped, and rebooted independently. A running domain can be dynamically reconfigured to add or remove CPUs, RAM, or I/O devices without requiring a reboot.

Secure Shell

Secure Shell, or SSH, is a cryptographic (encrypted) network protocol to allow remote login and other network services to operate securely over an unsecured network. SSH provides a secure channel over an unsecured network in a client-server architecture, connecting an SSH client application with an SSH server. Common applications include remote command-line login and remote command execution, but any network service can be secured with SSH. The protocol specification distinguishes between two major versions, referred to as SSH-1 and SSH-2. The most visible application of the protocol is for access to shell accounts on Unix-like operating systems, but it sees some limited use on Windows as well. In 2015, Microsoft announced that they would include native support for SSH in a future release. SSH was designed as a replacement for Telnet and for unsecured remote shell protocols such as the Berkeley rlogin, rsh, and rexec protocols. Th
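As a hedged example of the remote command execution use case mentioned above, here is a sketch using the third-party Python library paramiko (my choice; the post does not name a client library). The host name, user name, and key path are placeholders, and a real deployment would verify known host keys rather than auto-accepting them.

    import os
    import paramiko

    client = paramiko.SSHClient()
    # Demo-only policy: accept unknown host keys instead of verifying them against known_hosts
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    client.connect(
        "server.example.com",                                  # placeholder host
        username="alice",                                      # placeholder user
        key_filename=os.path.expanduser("~/.ssh/id_ed25519"),  # placeholder key path
    )

    # The command and its output travel over the encrypted SSH channel
    stdin, stdout, stderr = client.exec_command("uname -a")
    print(stdout.read().decode())
    client.close()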

Remote Desktop Protocol

Remote Desktop Protocol (RDP) is a proprietary protocol developed by Microsoft, which provides a user with a graphical interface to connect to another computer over a network connection. The user employs RDP client software for this purpose, while the other computer must run RDP server software. Clients exist for most versions of Microsoft Windows (including Windows Mobile), Linux, Unix, OS X, iOS, Android, and other operating systems. RDP servers are built into Windows operating systems; an RDP server for Unix and OS X also exists. By default, the server listens on TCP port 3389 and UDP port 3389. Microsoft currently refers to their official RDP client software as Remote Desktop Connection, formerly "Terminal Services Client". The protocol is an extension of the ITU-T T.128 application sharing protocol.

Paravirtualization

In computing, paravirtualization is a virtualization technique that presents a software interface to virtual machines that is similar, but not identical, to that of the underlying hardware. The intent of the modified interface is to reduce the portion of the guest's execution time spent performing operations which are substantially more difficult to run in a virtual environment compared to a non-virtualized environment. Paravirtualization provides specially defined 'hooks' to allow the guest(s) and host to request and acknowledge these tasks, which would otherwise be executed in the virtual domain (where execution performance is worse). A successful paravirtualized platform may allow the virtual machine monitor (VMM) to be simpler (by relocating execution of critical tasks from the virtual domain to the host domain), and/or reduce the overall performance degradation of machine execution inside the virtual guest. Paravirtualization requires

Operating-system-level virtualization, Flexibility

Operating-system-level virtualization is a server-virtualization method where the kernel of an operating system allows for multiple isolated user-space instances, instead of just one. Such instances (sometimes called containers, software containers, virtualization engines (VE), virtual private servers (VPS), or jails) may look and feel like a real server from the point of view of their owners and users. On Unix-like operating systems, one can see this technology as an advanced implementation of the standard chroot mechanism, as sketched below. In addition to isolation mechanisms, the kernel often provides resource-management features to limit the impact of one container's activities on other containers.
Uses: Operating-system-level virtualization is commonly used in virtual hosting environments, where it is useful for securely allocating finite hardware resources amongst a large number of mutually distrusting users. System administrators may also use it, to a lesser extent, for consolidati
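Since the paragraph compares containers to an advanced chroot, here is a deliberately minimal Python sketch of the underlying chroot idea. It must run as root on a Unix-like system, the jail directory is a placeholder, and unlike real container engines it adds no namespaces or resource limits; it only changes what the process sees as its root filesystem.

    import os

    jail = "/srv/jail"   # placeholder: a directory pre-populated with a minimal filesystem

    os.chroot(jail)      # requires root privileges
    os.chdir("/")        # from now on, "/" refers to the jail directory for this process

    # Anything executed after this point cannot see files outside the jail,
    # which is the isolation idea that OS-level virtualization extends with
    # kernel-enforced resource management.
    print(os.listdir("/"))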

Linux-VServer

Linux-VServer is a virtual private server implementation that was created by adding operating system-level virtualization capabilities to the Linux kernel. It is developed and distributed as open-source software. The project was started by Jacques Gélinas. It is now maintained by Herbert Pötzl of Austria and is not related to the Linux Virtual Server project, which implements network load balancing. Linux-VServer is a jail mechanism in that it can be used to securely partition resources on a computer system (such as the file system, CPU time, network addresses and memory) in such a way that processes cannot mount a denial-of-service attack on anything outside their partition. Each partition is called a security context, and the virtualized system within it is the virtual private server. A chroot-like utility for descending into security contexts is provided. Booting a virtual private server is then simply a matter of kickstarting init in a new

Hybrid server

A hybrid hosting service or hybrid server is a type of Internet hosting which combines a physically hosted server with virtualization technology. A hybrid server is a new kind of dedicated server that offers both the power of a classic dedicated server and the flexibility of cloud computing. On hybrid dedicated servers, hardware is 100% allocated to the user. The price is lower than for dedicated servers. The server is separated into hybrid server environments using Red Hat KVM or any other virtualization. Each hybrid environment is securely isolated and has guaranteed resources available to it, which ensures a high level of performance and responsiveness. A hybrid server combines all of the benefits of virtualization technology with the performance of a full dedicated server, so an administrator can use automation to suspend, restart, or reinstall the operating system. One large server is split into a few (normally two) hybrid servers.

Emulator

In computing, an emulator is hardware or software that enables one computer system (called the host) to behave like another computer system (called the guest). An emulator typically enables the host system to run software or use peripheral devices designed for the guest system.
Emulators in computing: Emulation refers to the ability of a computer program in an electronic device to emulate (imitate) another program or device. Many printers, for example, are designed to emulate Hewlett-Packard LaserJet printers because so much software is written for HP printers. If a non-HP printer emulates an HP printer, any software written for a real HP printer will also run in the non-HP printer emulation and produce equivalent printing. A hardware emulator is an emulator which takes the form of a hardware device. Examples include the DOS-compatible card installed in some old-world Macintoshes like the Centris 610 or Performa 630 that allowed them to run PC p