Wednesday, July 27, 2016

Malicious Software Removal Tool

Microsoft Windows Malicious Software Removal Tool is a freely distributed virus removal tool developed by Microsoft for the Microsoft Windows operating system. First released on January 13, 2005, it is an on-demand anti-virus tool ("on-demand" means it lacks real-time protection) that scans the computer for specific widespread malware and tries to eliminate the infection. It is automatically distributed to Microsoft Windows computers via the Windows Update service but can also be separately downloaded. 

The program is usually updated on the second Tuesday of every month (commonly called "Patch Tuesday") and distributed via Windows Update, at which point it runs once automatically in the background and reports if malicious software is found. Alternatively, users can manually download this tool from the Microsoft Download Center. It records its results in a log file located at %windir%\debug\mrt.log. To run it manually at other times, users can start "mrt.exe" using the Windows Command Prompt or Run command in the Start Menu. Since support for Windows 2000 ended on July 13, 2010, Microsoft stopped distributing the tool to Windows 2000 users via Windows Update. The last version of the tool that could run on Windows 2000 was 4.20, released on May 14, 2013. Starting with version 5.1, released on June 11, 2013, support for Windows 2000 was dropped altogether.
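
For illustration, a minimal Python sketch of running the tool on demand and then reading its log (assumptions: an elevated prompt on a Windows machine that already has the tool installed, and the /Q quiet-scan switch as documented by Microsoft):

    import os
    import subprocess

    # Launch MRT.exe (shipped in System32) in quiet mode; drop "/Q" to get the GUI wizard.
    mrt = os.path.join(os.environ["WINDIR"], "System32", "MRT.exe")
    subprocess.run([mrt, "/Q"])

    # The tool appends its results to %windir%\debug\mrt.log, as noted above.
    log_path = os.path.join(os.environ["WINDIR"], "debug", "mrt.log")
    with open(log_path, "r", errors="replace") as log:
        print(log.read()[-2000:])  # show only the most recent entries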

As released, the tool is configured to report anonymized data about infections to Microsoft if any are detected. The reporting behavior is disclosed in the tool's EULA, and can be disabled if desired. 

In a June 2006 Microsoft report, the company claimed that the tool had removed 16 million instances of malicious software from 5.7 million of 270 million total unique Windows computers since its release in January 2005. The report also stated that, on average, the tool removes malicious software from 1 in every 311 computers on which it runs. As of May 19, 2009, Microsoft claimed that the software had removed password stealer threats from 859,842 machines.

In August 2013, the Malicious Software Removal Tool deleted old, vulnerable versions of the Tor client in order to end the spread of the Sefnit botnet (which mined for bitcoins without the host owner's approval and later engaged in click fraud). Approximately two million hosts had been cleaned by October, although this was slightly less than half of the estimated infections; the remaining machines presumably did not have automatic Windows Update enabled or did not run the tool manually.

Although Windows XP support ended on April 8, 2014, Microsoft announced that updates for the Windows XP version of the Malicious Software Removal Tool would be provided until July 14, 2015.

Sunday, July 24, 2016

Security governance

The Software Engineering Institute at Carnegie Mellon University, in a publication titled "Governing for Enterprise Security (GES)", defines characteristics of effective security governance. These include:
  • An enterprise-wide issue
  • Leaders are accountable
  • Viewed as a business requirement
  • Risk-based
  • Roles, responsibilities, and segregation of duties defined
  • Addressed and enforced in policy
  • Adequate resources committed
  • Staff aware and trained
  • A development life cycle requirement
  • Planned, managed, measurable, and measured
  • Reviewed and audited


Thursday, July 21, 2016

Information security culture

Employees' behavior has a significant impact on information security in organizations. Cultural concepts can help different segments of the organization attend to information security within the organization. ″Exploring the Relationship between Organizational Culture and Information Security Culture″ provides the following definition of information security culture: ″ISC is the totality of patterns of behavior in an organization that contribute to the protection of information of all kinds.″

Information security culture needs to be improved continuously. In ″Information Security Culture from Analysis to Change″, the authors comment, ″It's a never-ending process, a cycle of evaluation and change or maintenance.″ To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation.

  1. Pre-evaluation: to identify employees' awareness of information security and to analyze the current security policy.
  2. Strategic planning: to come up with a better awareness program, clear targets need to be set. Clustering people into target groups helps to achieve this.
  3. Operative planning: a good security culture can be established based on internal communication, management buy-in, and a security awareness and training program.
  4. Implementation: four stages should be used to implement the information security culture. They are commitment of the management, communication with organizational members, courses for all organizational members, and commitment of the employees.
  5. Post-evaluation: to assess how well the preceding steps worked, feeding back into the next cycle of evaluation and change.


Sunday, July 17, 2016

Sources of standards

International Organization for Standardization (ISO) is a consortium of national standards institutes from 157 countries, coordinated through a secretariat in Geneva, Switzerland. ISO is the world's largest developer of standards. ISO 15443: "Information technology - Security techniques - A framework for IT security assurance", ISO/IEC 27002: "Information technology - Security techniques - Code of practice for information security management", ISO/IEC 20000: "Information technology - Service management", and ISO/IEC 27001: "Information technology - Security techniques - Information security management systems - Requirements" are of particular interest to information security professionals.

The US National Institute of Standards and Technology (NIST) is a non-regulatory federal agency within the U.S. Department of Commerce. The NIST Computer Security Division develops standards, metrics, tests and validation programs as well as publishes standards and guidelines to increase secure IT planning, implementation, management and operation. NIST is also the custodian of the US Federal Information Processing Standard publications (FIPS).

The Internet Society is a professional membership society with more than 100 organizations and over 20,000 individual members in over 180 countries. It provides leadership in addressing issues that confront the future of the Internet, and is the organization home for the groups responsible for Internet infrastructure standards, including the Internet Engineering Task Force (IETF) and the Internet Architecture Board (IAB). The ISOC hosts the Requests for Comments (RFCs) which includes the Official Internet Protocol Standards and the RFC-2196 Site Security Handbook.

The Information Security Forum is a global nonprofit organization of several hundred leading organizations in financial services, manufacturing, telecommunications, consumer goods, government, and other areas. It undertakes research into information security practices and offers advice in its biannual Standard of Good Practice and more detailed advisories for members.

The Institute of Information Security Professionals (IISP) is an independent, non-profit body governed by its members, with the principal objective of advancing the professionalism of information security practitioners and thereby the professionalism of the industry as a whole. The Institute developed the IISP Skills Framework. This framework describes the range of competencies expected of Information Security and Information Assurance Professionals in the effective performance of their roles. It was developed through collaboration between both private and public sector organisations and world-renowned academics and security leaders.

The German Federal Office for Information Security (in German, Bundesamt für Sicherheit in der Informationstechnik (BSI)) publishes the BSI-Standards 100-1 to 100-4, a set of recommendations covering "methods, processes, procedures, approaches and measures relating to information security". The BSI-Standard 100-2 IT-Grundschutz Methodology describes how information security management can be implemented and operated. The standard includes a very specific guide, the IT Baseline Protection Catalogs (also known as IT-Grundschutz Catalogs). Before 2005 the catalogs were known as the "IT Baseline Protection Manual". The catalogs are a collection of documents useful for detecting and combating security-relevant weak points in the IT environment (IT cluster). As of September 2013, the collection encompasses over 4,400 pages, including the introduction and catalogs. The IT-Grundschutz approach is aligned with the ISO/IEC 2700x family.

At the European Telecommunications Standards Institute, a catalog of information security indicators has been standardized by the Industrial Specification Group (ISG) ISI.

Thursday, July 14, 2016

Cryptography

Information security uses cryptography to transform usable information into a form that renders it unusable by anyone other than an authorized user; this process is called encryption. Information that has been encrypted (rendered unusable) can be transformed back into its original usable form by an authorized user, who possesses the cryptographic key, through the process of decryption. Cryptography is used in information security to protect information from unauthorized or accidental disclosure while the information is in transit (either electronically or physically) and while information is in storage.
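
As a minimal sketch of that encrypt/decrypt round trip in Python (using the Fernet recipe from the third-party cryptography package; this illustrates the general idea rather than any particular product):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # the cryptographic key held by authorized users
    cipher = Fernet(key)

    # Encryption renders the information unusable to anyone without the key...
    ciphertext = cipher.encrypt(b"quarterly financial report")

    # ...and decryption with the same key transforms it back into its usable form.
    assert cipher.decrypt(ciphertext) == b"quarterly financial report"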

Cryptography provides information security with other useful applications as well including improved authentication methods, message digests, digital signatures, non-repudiation, and encrypted network communications. Older less secure applications such as telnet and ftp are slowly being replaced with more secure applications such as ssh that use encrypted network communications. Wireless communications can be encrypted using protocols such as WPA/WPA2 or the older (and less secure) WEP. Wired communications (such as ITU‑T G.hn) are secured using AES for encryption and X.1035 for authentication and key exchange. Software applications such as GnuPG or PGP can be used to encrypt data files and Email.

Cryptography can introduce security problems when it is not implemented correctly. Cryptographic solutions need to be implemented using industry accepted solutions that have undergone rigorous peer review by independent experts in cryptography. The length and strength of the encryption key is also an important consideration. A key that is weak or too short will produce weak encryption. The keys used for encryption and decryption must be protected with the same degree of rigor as any other confidential information. They must be protected from unauthorized disclosure and destruction and they must be available when needed. Public key infrastructure (PKI) solutions address many of the problems that surround key management.

Monday, July 11, 2016

Risk management

The Certified Information Systems Auditor (CISA) Review Manual 2006 provides the following definition of risk management: "Risk management is the process of identifying vulnerabilities and threats to the information resources used by an organization in achieving business objectives, and deciding what countermeasures, if any, to take in reducing risk to an acceptable level, based on the value of the information resource to the organization."

There are two things in this definition that may need some clarification. First, the process of risk management is an ongoing, iterative process. It must be repeated indefinitely. The business environment is constantly changing and new threats and vulnerabilities emerge every day. Second, the choice of countermeasures (controls) used to manage risks must strike a balance between productivity, cost, effectiveness of the countermeasure, and the value of the informational asset being protected.

Risk analysis and risk evaluation processes have their limitations since, when security incidents occur, they emerge in a context, and their rarity and even their uniqueness give rise to unpredictable threats. The analysis of these phenomena, which are characterized by breakdowns, surprises and side-effects, requires a theoretical approach that is able to examine and interpret subjectively the detail of each incident.

Risk is the likelihood that something bad will happen that causes harm to an informational asset (or the loss of the asset). A vulnerability is a weakness that could be used to endanger or cause harm to an informational asset. A threat is anything (man-made or act of nature) that has the potential to cause harm.

The likelihood that a threat will use a vulnerability to cause harm creates a risk. When a threat does use a vulnerability to inflict harm, it has an impact. In the context of information security, the impact is a loss of availability, integrity, and confidentiality, and possibly other losses (lost income, loss of life, loss of real property). It should be pointed out that it is not possible to identify all risks, nor is it possible to eliminate all risk. The remaining risk is called "residual risk".

A risk assessment is carried out by a team of people who have knowledge of specific areas of the business. Membership of the team may vary over time as different parts of the business are assessed. The assessment may use a subjective qualitative analysis based on informed opinion, or, where reliable dollar figures and historical information are available, the analysis may use quantitative analysis.
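
For the quantitative side, one standard worked example (not prescribed by the text above) is annualized loss expectancy: single loss expectancy (asset value times exposure factor) multiplied by the annualized rate of occurrence. A small Python sketch with entirely made-up figures:

    # Hypothetical figures for a single informational asset.
    asset_value = 250_000              # value of the asset in dollars
    exposure_factor = 0.40             # fraction of the value lost in one incident
    annual_rate_of_occurrence = 0.5    # expected incidents per year

    single_loss_expectancy = asset_value * exposure_factor                       # SLE = AV * EF
    annual_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence  # ALE = SLE * ARO

    print(f"SLE = ${single_loss_expectancy:,.0f}  ALE = ${annual_loss_expectancy:,.0f}")
    # A countermeasure is generally worthwhile only if it reduces ALE by more than its own annual cost.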

Thursday, July 7, 2016

Non-repudiation

In law, non-repudiation implies one's intention to fulfill their obligations to a contract. It also implies that one party of a transaction cannot deny having received a transaction nor can the other party deny having sent a transaction.

It is important to note that while technology such as cryptographic systems can assist in non-repudiation efforts, the concept is at its core a legal concept transcending the realm of technology. It is not, for instance, sufficient to show that the message matches a digital signature signed with the sender's private key, and thus only the sender could have sent the message and nobody else could have altered it in transit. The alleged sender could in return demonstrate that the digital signature algorithm is vulnerable or flawed, or allege or prove that his signing key has been compromised. The fault for these violations may or may not lie with the sender himself, and such assertions may or may not relieve the sender of liability, but the assertion would invalidate the claim that the signature necessarily proves authenticity and integrity and thus prevents repudiation.
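
The technical half of that argument can be sketched in Python with the third-party cryptography package (Ed25519 keys are just one choice here; the code shows only the mathematical check, not the legal questions discussed above):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()   # held, and hopefully protected, by the sender
    public_key = private_key.public_key()

    message = b"I agree to the terms of the contract."
    signature = private_key.sign(message)

    try:
        public_key.verify(signature, message)    # raises InvalidSignature if either was altered
        print("signature matches the sender's public key")
    except InvalidSignature:
        print("message was altered or signed with a different key")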

Monday, July 4, 2016

Integrity

In information security, data integrity means maintaining and assuring the accuracy and completeness of data over its entire life-cycle. This means that data cannot be modified in an unauthorized or undetected manner. This is not the same thing as referential integrity in databases, although it can be viewed as a special case of consistency as understood in the classic ACID model of transaction processing. Information security systems typically provide message integrity in addition to data confidentiality.
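
A minimal Python sketch of detecting such a modification with a message digest (standard-library hashlib; a real system would usually pair this with a MAC or digital signature so the stored digest cannot simply be replaced along with the data):

    import hashlib

    record = b"account=42;balance=1000.00"
    stored_digest = hashlib.sha256(record).hexdigest()   # recorded when the data was known to be good

    received = b"account=42;balance=9000.00"             # the data as later read back
    if hashlib.sha256(received).hexdigest() != stored_digest:
        print("integrity check failed: the data was modified")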

Friday, July 1, 2016

Confidentiality,information security

In information security, confidentiality "is the property, that information is not made available or disclosed to unauthorized individuals, entities, or processes" (excerpt from ISO/IEC 27000).

Monday, June 27, 2016

Photo identification,photo ID

Photo identification or photo ID is an identity document that includes a photograph of the holder, usually only his or her face. The most commonly accepted forms of photo ID are those issued by government authorities, such as driver's licenses, identity cards and passports, but special-purpose photo IDs may be also produced, such as internal security or access control cards.

Photo identification may be used for face-to-face authentication of the identity of a party who is personally unknown to the person in authority, or when that person does not have access to a file, a directory, a registry or an information service that contains or can render a photograph of somebody based on that person's name and other personal information.

Friday, June 24, 2016

User account policy

A user account policy is a document which outlines the requirements for requesting and maintaining an account on computer systems or networks, typically within an organization. It is very important for large sites where users typically have accounts on many systems. Some sites have users read and sign an account policy as part of the account request process.

Policy content

  • Should state who has the authority to approve account requests.
  • Should state who is allowed to use the resources (e.g., employees or students only)
  • Should state any citizenship/resident requirements.
  • Should state if users are allowed to share accounts or if users are allowed to have multiple accounts on a single host.
  • Should state the users’ rights and responsibilities.
  • Should state when the account should be disabled and archived.
  • Should state how long the account can remain inactive before it is disabled.
  • Should state password construction and aging rules.

Tuesday, June 21, 2016

Security policy


Security policy is a definition of what it means to be secure for a system, organization or other entity. For an organization, it addresses the constraints on behavior of its members as well as constraints imposed on adversaries by mechanisms such as doors, locks, keys and walls. For systems, the security policy addresses constraints on functions and flow among them, constraints on access by external systems and adversaries including programs and access to data by people.

Significance
If it is important to be secure, then it is important to be sure all of the security policy is enforced by mechanisms that are strong enough. There are many organized methodologies and risk assessment strategies to assure completeness of security policies and assure that they are completely enforced. In complex systems, such as information systems, policies can be decomposed into sub-policies to facilitate the allocation of security mechanisms to enforce sub-policies. However, this practice has pitfalls. It is too easy to simply go directly to the sub-policies, which are essentially the rules of operation and dispense with the top level policy. That gives the false sense that the rules of operation address some overall definition of security when they do not. Because it is so difficult to think clearly with completeness about security, rules of operation stated as "sub-policies" with no "super-policy" usually turn out to be rambling rules that fail to enforce anything with completeness. Consequently, a top-level security policy is essential to any serious security scheme and sub-policies and rules of operation are meaningless without it.

Friday, June 17, 2016

Network security policy

A network security policy, or NSP, is a generic document that outlines rules for computer network access, determines how policies are enforced and lays out some of the basic architecture of the company security/ network security environment. The document itself is usually several pages long and written by a committee. A security policy goes far beyond the simple idea of "keep the bad guys out". It's a very complex document, meant to govern data access, web-browsing habits, use of passwords and encryption, email attachments and more. It specifies these rules for individuals or groups of individuals throughout the company.

Security policy should keep the malicious users out and also exert control over potential risky users within your organization. The first step in creating a policy is to understand what information and services are available (and to which users), what the potential is for damage and whether any protection is already in place to prevent misuse.

In addition, the security policy should dictate a hierarchy of access permissions; that is, grant users access only to what is necessary for the completion of their work.

While writing the security document can be a major undertaking, a good start can be achieved by using a template. The National Institute of Standards and Technology (NIST) provides a security-policy guideline.
The policies could be expressed as a set of instructions that could be understood by special purpose network hardware dedicated for securing the network.

Tuesday, June 14, 2016

Computer security,cybersecurity,IT security

          Computer security, also known as cybersecurity or IT security, is the protection of information systems from theft or damage to the hardware, the software, and to the information on them, as well as from disruption or misdirection of the services they provide. It includes controlling physical access to the hardware, as well as protecting against harm that may come via network access, data and code injection, and due to malpractice by operators, whether intentional, accidental, or due to them being tricked into deviating from secure procedures.

         The field is of growing importance due to the increasing reliance on computer systems in most societies. Computer systems now include a very wide variety of "smart" devices, including smartphones, televisions and tiny devices as part of the Internet of Things – and networks include not only the Internet and private data networks, but also Bluetooth, Wi-Fi and other wireless networks.


Saturday, June 11, 2016

Cyberspace Electronic Security Act,CESA

The Cyberspace Electronic Security Act of 1999 (CESA) is a bill proposed by the Clinton administration during the 106th United States Congress that enables the government to harvest keys used in encryption. The Cyberspace Electronic Security Act gives law enforcement the ability to gain access to encryption keys and cryptography methods. The initial version of this act enabled federal law enforcement agencies to secretly use monitoring, electronic capturing equipment and other technologies to access and obtain information. These provisions were later stricken from the act, although federal law enforcement agencies still have a significant degree of latitude to conduct investigations relating to electronic information. The act generated discussion about what capabilities should be allowed to law enforcement in the detection of criminal activity. After vocal objections from civil liberties groups, the administration backed away from the controversial bill.

Tuesday, June 7, 2016

Internet security products

Antivirus 
Antivirus software and Internet security programs can protect a programmable device from attack by detecting and eliminating viruses. Antivirus software was mainly shareware in the early years of the Internet, but there are now several free security applications on the Internet to choose from for all platforms.

Password managers 
A password manager is a software application that helps a user store and organize passwords. Password managers usually store passwords encrypted, requiring the user to create a master password; a single, ideally very strong password which grants the user access to their entire password database.
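
A minimal sketch of that idea in Python, assuming the third-party cryptography package: derive an encryption key from the master password with PBKDF2 and use it to encrypt a single entry. The master password, entry text and iteration count are placeholders, and a real password manager needs far more (salt storage, a vault format, memory hygiene, and so on):

    import base64, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    master_password = b"correct horse battery staple"   # the one password the user must remember
    salt = os.urandom(16)                               # stored with the vault; random, not secret

    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    key = base64.urlsafe_b64encode(kdf.derive(master_password))

    vault = Fernet(key)
    stored_entry = vault.encrypt(b"example.com : alice : s3cr3t-p4ssw0rd")  # what sits on disk
    print(vault.decrypt(stored_entry).decode())          # only recoverable with the master password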

Security suites 
So-called security suites were first offered for sale in 2003 (McAfee) and contain a suite of firewalls, anti-virus, anti-spyware and more. They may now offer theft protection, portable storage device safety checks, private Internet browsing, cloud anti-spam, a file shredder, or make security-related decisions (answering popup windows), and several were free of charge as of at least 2012.

Saturday, June 4, 2016

Types of firewall

Packet filter
         A packet filter is a first generation firewall that processes network traffic on a packet-by-packet basis. Its main job is to filter traffic from a remote IP host, so a router is needed to connect the internal network to the Internet. The router is known as a screening router, which screens packets leaving and entering the network.
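
A toy Python sketch of the packet-by-packet decision a screening router makes (purely illustrative: the rule table and the tuple representation are invented for this example, and real filters work on parsed IP/TCP headers):

    # Each rule: (source-address prefix, destination port, action); first match wins.
    RULES = [
        ("10.0.0.", 22, "allow"),   # SSH, but only from the internal network
        ("",        23, "deny"),    # Telnet from anywhere is dropped
        ("",        80, "allow"),   # HTTP from anywhere is allowed
    ]
    DEFAULT_ACTION = "deny"

    def filter_packet(src_ip: str, dst_port: int) -> str:
        for prefix, port, action in RULES:
            if src_ip.startswith(prefix) and dst_port == port:
                return action
        return DEFAULT_ACTION

    print(filter_packet("10.0.0.5", 22))      # allow
    print(filter_packet("203.0.113.9", 23))   # deny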

Stateful packet inspection
        In a stateful firewall the circuit-level gateway is a proxy server that operates at the network level of an Open Systems Interconnection (OSI) model and statically defines what traffic will be allowed. Circuit proxies will forward Network packets (formatted unit of data) containing a given port number, if the port is permitted by the algorithm. The main advantage of a proxy server is its ability to provide Network Address Translation (NAT), which can hide the user's IP address from the Internet, effectively protecting all internal information from the Internet.

Application-level gateway
        An application-level firewall is a third generation firewall where a proxy server operates at the very top of the OSI model, the IP suite application level. A network packet is forwarded only if a connection is established using a known protocol. Application-level gateways are notable for analyzing entire messages rather than individual packets of data when the data are being sent or received.

Browser choice
      Web browser statistics tend to affect the amount a Web browser is exploited. For example, Internet Explorer 6, which used to own a majority of the Web browser market share, is considered extremely insecure because vulnerabilities were exploited due to its former popularity. Since browser choice is now more evenly distributed (Internet Explorer at 28.5%, Firefox at 18.4%, Google Chrome at 40.8%, and so on), vulnerabilities are exploited in many different browsers.

Wednesday, June 1, 2016

Firewalls

Firewalls
A computer firewall controls access between networks. It generally consists of gateways and filters which vary from one firewall to another. Firewalls also screen network traffic and are able to block traffic that is dangerous. Firewalls can act as an intermediate server between SMTP and Hypertext Transfer Protocol (HTTP) connections.

Role of firewalls in web security
Firewalls impose restrictions on incoming and outgoing network packets to and from private networks. Incoming or outgoing traffic must pass through the firewall; only authorized traffic is allowed to pass through it. Firewalls create checkpoints between an internal private network and the public Internet, also known as choke points (borrowed from the identical military term of a combat-limiting geographical feature). Firewalls can create choke points based on IP source and TCP port number. They can also serve as the platform for IPsec. Using their tunnel mode capability, firewalls can be used to implement VPNs. Firewalls can also limit network exposure by hiding the internal network system and information from the public Internet.

Friday, May 27, 2016

Message Authentication Code,encrypt,authenticity

A message authentication code (MAC) is a cryptographic technique that uses a secret key to compute a short tag over a message. The receiver, who holds the same secret key as the sender, recomputes the tag for the received message and compares it with the tag that was sent; a match shows that the message was not altered and that it came from someone who knows the key. The message authentication code therefore protects both a message's data integrity and its authenticity.
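
A minimal sketch with HMAC-SHA-256 from the Python standard library (key and message are placeholders):

    import hashlib, hmac

    secret_key = b"shared-secret-key"                  # known only to sender and receiver
    message = b"transfer 100 to account 42"

    tag = hmac.new(secret_key, message, hashlib.sha256).hexdigest()   # sent along with the message

    # The receiver recomputes the tag with the same key and compares in constant time.
    expected = hmac.new(secret_key, message, hashlib.sha256).hexdigest()
    print(hmac.compare_digest(tag, expected))          # True only if message and key both match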

Tuesday, May 24, 2016

Pretty Good Privacy

Pretty Good Privacy provides confidentiality by encrypting messages to be transmitted or data files to be stored using an encryption algorithm such as Triple DES or CAST-128. Email messages can be protected by using cryptography in various ways, such as the following:

  1. Signing an email message to ensure its integrity and confirm the identity of its sender.
  2. Encrypting the body of an email message to ensure its confidentiality.
  3. Encrypting the communications between mail servers to protect the confidentiality of both message body and message header.

The first two methods, message signing and message body encryption, are often used together; however, encrypting the transmissions between mail servers is typically used only when two organizations want to protect emails regularly sent between each other. For example, the organizations could establish a virtual private network (VPN) to encrypt the communications between their mail servers over the Internet. Unlike methods that can only encrypt a message body, a VPN can encrypt entire messages, including email header information such as senders, recipients, and subjects. In some cases, organizations may need to protect header information. However, a VPN solution alone cannot provide a message signing mechanism, nor can it provide protection for email messages along the entire route from sender to recipient.
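
A hedged sketch of the first two methods in Python, using the third-party python-gnupg wrapper (it shells out to an installed GnuPG with an existing keyring; the recipient address is a placeholder and error handling is omitted):

    import gnupg

    gpg = gnupg.GPG()                 # default GnuPG home directory and keyring

    body = "Meeting notes attached."

    signed = gpg.sign(body)                                        # method 1: integrity and sender identity
    encrypted = gpg.encrypt(str(signed), "recipient@example.com")  # method 2: confidentiality of the body

    if encrypted.ok:
        print(str(encrypted))         # ASCII-armored PGP message, ready to paste into an email body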


Saturday, May 21, 2016

Security token

              Some online sites offer customers the ability to use a six-digit code which randomly changes every 30–60 seconds on a security token. The security token has built-in mathematical computations and manipulates numbers based on the current time built into the device. This means that every thirty seconds there is only a certain array of numbers possible which would be correct to validate access to the online account. The website that the user is logging into would be made aware of that device's serial number and would know the computation and correct time built into the device to verify that the number given is indeed one of the handful of six-digit numbers that works in that given 30–60 second cycle. After 30–60 seconds the device will present a new random six-digit number which can be used to log into the website.
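
Many such tokens follow the standard time-based one-time password scheme (RFC 6238), which can be sketched with only the Python standard library; the base32 secret below is a placeholder for the value shared between the token and the website, and a specific vendor's device may differ in detail:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period                 # same 30-second window on both sides
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                              # dynamic truncation, per RFC 4226
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))   # token and website derive the same code for the current window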

Saturday, May 14, 2016

Application security

Application security encompasses measures taken throughout the code's life-cycle to prevent gaps in the security policy of an application or the underlying system (vulnerabilities) through flaws in the design, development, deployment, upgrade, or maintenance of the application.

Applications only control the use of resources granted to them, and not which resources are granted to them. They, in turn, determine the use of these resources by users of the application through application security.

Wednesday, May 11, 2016

Phishing

Phishing is the attempt to acquire sensitive information such as usernames, passwords, and credit card details (and sometimes, indirectly, money), often for malicious reasons, by masquerading as a trustworthy entity in an electronic communication. The word is a neologism created as a homophone of fishing due to the similarity of using bait in an attempt to catch a victim. Communications purporting to be from popular social web sites, auction sites, banks, online payment processors or IT administrators are commonly used to lure unsuspecting victims. Phishing emails may contain links to websites that are infected with malware. Phishing is typically carried out by email spoofing or instant messaging, and it often directs users to enter details at a fake website whose look and feel are almost identical to the legitimate one. Phishing is an example of social engineering techniques used to deceive users, and exploits the poor usability of current web security technologies. Attempts to deal with the growing number of reported phishing incidents include legislation, user training, public awareness, and technical security measures. Many websites have now created secondary tools for applications, like maps for games, but these should be clearly marked as to who wrote them, and users should not reuse the same password anywhere on the Internet.

Phishing is a continual threat, and the risk is even larger in social media such as Facebook, Twitter, and Google+. Hackers could create a clone of a website and tell you to enter personal information, which is then emailed to them. Hackers commonly take advantage of these sites to attack people using them at their workplace, homes, or in public in order to take personal and security information that can affect the user or company (if in a workplace environment). Phishing takes advantage of the trust that the user may have since the user may not be able to tell that the site being visited, or program being used, is not real; therefore, when this occurs, the hacker has the chance to gain the personal information of the targeted user, such as passwords, usernames, security codes, and credit card numbers, among other things.

Saturday, May 7, 2016

Denial-of-service attacks

A denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a computer resource unavailable to its intended users. Although the means to carry out, motives for, and targets of a DoS attack may vary, it generally consists of the concerted efforts to prevent an Internet site or service from functioning efficiently or at all, temporarily or indefinitely. According to businesses who participated in an international business security survey, 25% of respondents experienced a DoS attack in 2007 and 16.8% experienced one in 2010.

Wednesday, May 4, 2016

Malicious software,Malware

A computer user can be tricked or forced into downloading software onto a computer that is of malicious intent. Such software comes in many forms, such as viruses, Trojan horses, spyware, and worms.

Malware, short for malicious software, is any software used to disrupt computer operation, gather sensitive information, or gain access to private computer systems. Malware is defined by its malicious intent, acting against the requirements of the computer user, and does not include software that causes unintentional harm due to some deficiency. The term badware is sometimes used, and applied to both true (malicious) malware and unintentionally harmful software.

      A botnet is a network of zombie computers that have been taken over by a robot or bot that performs large-scale malicious acts for the creator of the botnet.

Computer viruses are programs that can replicate their structures or effects by infecting other files or structures on a computer. The common use of a virus is to take over a computer to steal data.
Computer worms are programs that can replicate themselves throughout a computer network, performing malicious tasks throughout.

Ransomware is a type of malware which restricts access to the computer system that it infects, and demands a ransom paid to the creator(s) of the malware in order for the restriction to be removed.
Scareware is scam software with malicious payloads, usually of limited or no benefit, that are sold to consumers via certain unethical marketing practices. The selling approach uses social engineering to cause shock, anxiety, or the perception of a threat, generally directed at an unsuspecting user.
Spyware refers to programs that surreptitiously monitor activity on a computer system and report that information to others without the user's consent.

A Trojan horse, commonly known as a Trojan, is a general term for malicious software that pretends to be harmless, so that a user willingly allows it to be downloaded onto the computer.

Sunday, May 1, 2016

Internet security,encryption

        Internet security is a branch of computer security specifically related to the Internet, often involving browser security but also network security on a more general level, as it applies to other applications or operating systems as a whole. Its objective is to establish rules and measures to use against attacks over the Internet. The Internet represents an insecure channel for exchanging information, leading to a high risk of intrusion or fraud, such as phishing. Different methods have been used to protect the transfer of data, including encryption and from-the-ground-up engineering.

Wednesday, April 27, 2016

VPN,routers

With the increasing use of VPNs, many have started deploying VPN connectivity on routers for additional security and encryption of data transmission by using various cryptographic techniques. Setting up VPN services on a router allows any connected device to use the VPN network while it is enabled. This also creates VPN services on devices that do not have native VPN clients such as smart-TVs, gaming consoles etc.

Many router manufacturers, such as Cisco, Linksys, Asus, and Netgear supply their routers with built-in VPN clients.

Sunday, April 24, 2016

VPNs in mobile environments


Mobile virtual private networks are used in settings where an endpoint of the VPN is not fixed to a single IP address, but instead roams across various networks such as data networks from cellular carriers or between multiple Wi-Fi access points. Mobile VPNs have been widely used in public safety, where they give law enforcement officers access to mission-critical applications, such as computer-assisted dispatch and criminal databases, while they travel between different subnets of a mobile network. They are also used in field service management and by healthcare organizations, among other industries.

Increasingly, mobile VPNs are being adopted by mobile professionals who need reliable connections. They are used for roaming seamlessly across networks and in and out of wireless coverage areas without losing application sessions or dropping the secure VPN session. A conventional VPN can not withstand such events because the network tunnel is disrupted, causing applications to disconnect, time out, or fail, or even cause the computing device itself to crash.

Instead of logically tying the endpoint of the network tunnel to the physical IP address, each tunnel is bound to a permanently associated IP address at the device. The mobile VPN software handles the necessary network authentication and maintains the network sessions in a manner transparent to the application and the user. The Host Identity Protocol (HIP), under study by the Internet Engineering Task Force, is designed to support mobility of hosts by separating the role of IP addresses for host identification from their locator functionality in an IP network. With HIP a mobile host maintains its logical connections established via the host identity identifier while associating with different IP addresses when roaming between access networks.

Thursday, April 21, 2016

Routing

Tunnelling protocols can operate in a point-to-point network topology that would theoretically not be considered as a VPN, because a VPN by definition is expected to support arbitrary and changing sets of network nodes. But since most router implementations support a software-defined tunnel interface, customer-provisioned VPNs often are simply defined tunnels running conventional routing protocols.

Provider-provisioned VPN building-blocks
       Depending on whether a provider-provisioned VPN (PPVPN) operates in layer 2 or layer 3, the building blocks described below may be L2 only, L3 only, or a combination of both. Multi-protocol label switching (MPLS) functionality blurs the L2-L3 identity.

RFC 4026 generalized the following terms to cover L2 and L3 VPNs, but they were introduced in RFC 2547. More information on the devices below can also be found in Lewis, Cisco Press.

Customer (C) devices
A device that is within a customer's network and not directly connected to the service provider's network. C devices are not aware of the VPN.

Customer Edge device (CE)
A device at the edge of the customer's network which provides access to the PPVPN. Sometimes it's just a demarcation point between provider and customer responsibility. Other providers allow customers to configure it.

Provider edge device (PE)
A PE is a device, or set of devices, at the edge of the provider network which connects to customer networks through CE devices and presents the provider's view of the customer site. PEs are aware of the VPNs that connect through them, and maintain VPN state.

Provider device (P)
A P device operates inside the provider's core network and does not directly interface to any customer endpoint. It might, for example, provide routing for many provider-operated tunnels that belong to different customers' PPVPNs. While the P device is a key part of implementing PPVPNs, it is not itself VPN-aware and does not maintain VPN state. Its principal role is allowing the service provider to scale its PPVPN offerings, for example, by acting as an aggregation point for multiple PEs. P-to-P connections, in such a role, often are high-capacity optical links between major locations of providers.

Sunday, April 17, 2016

Virtual private network

A virtual private network (VPN) extends a private network across a public network, such as the Internet. It enables users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network, and thus are benefiting from the functionality, security and management policies of the private network. A VPN is created by establishing a virtual point-to-point connection through the use of dedicated connections, virtual tunnelling protocols, or traffic encryption.

A VPN spanning the Internet is similar to a wide area network (WAN). From a user perspective, the extended network resources are accessed in the same way as resources available within the private network. Traditional VPNs are characterized by a point-to-point topology, and they do not tend to support or connect broadcast domains. Therefore, communication, software, and networking, which are based on OSI layer 2 and broadcast packets, such as NetBIOS used in Windows networking, may not be fully supported or work exactly as they would on a local area network (LAN). VPN variants, such as Virtual Private LAN Service (VPLS) and layer 2 tunnelling protocols, are designed to overcome this limitation.

VPNs allow employees to securely access the corporate intranet while travelling outside the office. Similarly, VPNs securely connect geographically separated offices of an organization, creating one cohesive network. VPN technology is also used by individual Internet users to secure their wireless transactions, to circumvent geo-restrictions and censorship, and to connect to proxy servers for the purpose of protecting personal identity and location.

Thursday, April 14, 2016

Hardware virtualization techniques

This approach is described as full virtualization of the hardware, and can be implemented using a type 1 or type 2 hypervisor: a type 1 hypervisor runs directly on the hardware, and a type 2 hypervisor runs on another operating system, such as Linux or Windows. Each virtual machine can run any operating system supported by the underlying hardware. Users can thus run two or more different "guest" operating systems simultaneously, in separate "private" virtual computers.

The pioneer system using this concept was IBM's CP-40, the first (1967) version of IBM's CP/CMS (1967–1972) and the precursor to IBM's VM family (1972–present). With the VM architecture, most users run a relatively simple interactive computing single-user operating system, CMS, as a "guest" on top of the VM control program (VM-CP). This approach kept the CMS design simple, as if it were running alone; the control program quietly provides multitasking and resource management services "behind the scenes". In addition to CMS, communication and other system tasks are performed by multitasking VMs (RSCS, GCS, TCP/IP, UNIX), and users can run any of the other IBM operating systems, such as MVS, even a new CP itself, or now z/OS. Even the simple CMS could be run in a threaded environment (LISTSERV, TRICKLE). z/VM is the current version of VM, and is used to support hundreds or thousands of virtual machines on a given mainframe. Some installations use Linux on z Systems to run Web servers, where Linux runs as the operating system within many virtual machines.

Full virtualization is particularly helpful in operating system development, when experimental new code can be run at the same time as older, more stable, versions, each in a separate virtual machine. The process can even be recursive: IBM debugged new versions of its virtual machine operating system, VM, in a virtual machine running under an older version of VM, and even used this technique to simulate new hardware.

The standard x86 processor architecture as used in the modern PCs does not actually meet the Popek and Goldberg virtualization requirements. Notably, there is no execution mode where all sensitive machine instructions always trap, which would allow per-instruction virtualization.

Despite these limitations, several software packages have managed to provide virtualization on the x86 architecture, even though dynamic recompilation of privileged code, as first implemented by VMware, incurs some performance overhead as compared to a VM running on a natively virtualizable architecture such as the IBM System/370 or Motorola MC68020. By now, several other software packages such as Virtual PC, VirtualBox, Parallels Workstation and Virtual Iron manage to implement virtualization on x86 hardware.

Intel and AMD have introduced features to their x86 processors to enable virtualization in hardware.

As well as virtualization of the resources of a single machine, multiple independent nodes in a cluster can be combined and accessed as a single virtual NUMA machine.

Monday, April 11, 2016

Process virtual machines

        A process VM, sometimes called an application virtual machine, or Managed Runtime Environment (MRE), runs as a normal application inside a host OS and supports a single process. It is created when that process is started and destroyed when it exits. Its purpose is to provide a platform-independent programming environment that abstracts away details of the underlying hardware or operating system, and allows a program to execute in the same way on any platform.

A process VM provides a high-level abstraction – that of a high-level programming language (compared to the low-level ISA abstraction of the system VM). Process VMs are implemented using an interpreter; performance comparable to compiled programming languages can be achieved by the use of just-in-time compilation.

This type of VM has become popular with the Java programming language, which is implemented using the Java virtual machine. Other examples include the Parrot virtual machine, and the .NET Framework, which runs on a VM called the Common Language Runtime. All of them can serve as an abstraction layer for any computer language.

A special case of process VMs are systems that abstract over the communication mechanisms of a (potentially heterogeneous) computer cluster. Such a VM does not consist of a single process, but one process per physical machine in the cluster. They are designed to ease the task of programming concurrent applications by letting the programmer focus on algorithms rather than the communication mechanisms provided by the interconnect and the OS. They do not hide the fact that communication takes place, and as such do not attempt to present the cluster as a single machine.

Unlike other process VMs, these systems do not provide a specific programming language, but are embedded in an existing language; typically such a system provides bindings for several languages (e.g., C and FORTRAN). Examples are PVM (Parallel Virtual Machine) and MPI (Message Passing Interface). They are not strictly virtual machines, as the applications running on top still have access to all OS services, and are therefore not confined to the system model.

Thursday, April 7, 2016

System virtual machines

System virtual machine advantages

  • Multiple OS environments can co-exist on the same primary hard drive, with a virtual partition that allows sharing of files generated in either the "host" operating system or "guest" virtual environment. Adjunct software installations, wireless connectivity, and remote replication, such as printing and faxing, can be generated in any of the guest or host operating systems. Regardless of the system, all files are stored on the hard drive of the host OS.
  • Application provisioning, maintenance, high availability and disaster recovery are inherent in the virtual machine software selected.
  • Can provide emulated hardware environments different from the host's instruction set architecture (ISA), through emulation or by using just-in-time compilation.


The main disadvantages of VMs are:
  • A virtual machine is less efficient than an actual machine when it accesses the host hard drive indirectly.
  • When multiple VMs are concurrently running on the hard drive of the actual host, adjunct virtual machines may exhibit a varying and/or unstable performance (speed of execution and malware protection). This depends on the data load imposed on the system by other VMs, unless the selected VM software provides temporal isolation among virtual machines.
  • Malware protections for VMs are not necessarily compatible with the "host", and may require separate software.

The desire to run multiple operating systems was the initial motivation for virtual machines, so as to allow time-sharing among several single-tasking operating systems. In some respects, a system virtual machine can be considered a generalization of the concept of virtual memory that historically preceded it. IBM's CP/CMS, the first systems to allow full virtualization, implemented time sharing by providing each user with a single-user operating system, the CMS. Unlike virtual memory, a system virtual machine entitled the user to write privileged instructions in their code. This approach had certain advantages, such as adding input/output devices not allowed by the standard system.

As technology evolves virtual memory for purposes of virtualization, new systems of memory overcommitment may be applied to manage memory sharing among multiple virtual machines on one actual computer operating system. It may be possible to share "memory pages" that have identical contents among multiple virtual machines that run on the same physical machine, which may result in mapping them to the same physical page by a technique known as Kernel SamePage Merging. This is particularly useful for read-only pages, such as those that contain code segments; in particular, that would be the case for multiple virtual machines running the same or similar software, software libraries, web servers, middleware components, etc. The guest operating systems do not need to be compliant with the host hardware, thereby making it possible to run different operating systems on the same computer (e.g., Microsoft Windows, Linux, or previous versions of an operating system) to support future software.

The use of virtual machines to support separate guest operating systems is popular in regard to embedded systems. A typical use would be to run a real-time operating system simultaneously with a preferred complex operating system, such as Linux or Windows. Another use would be for novel and unproven software still in the developmental stage, so it runs inside a sandbox. Virtual machines have other advantages for operating system development, and may include improved debugging access and faster reboots.

Multiple VMs running their own guest operating system are frequently engaged for server consolidation.


Monday, April 4, 2016

Virtual machine,VM

         In computing, a virtual machine (VM) is an emulation of a particular computer system. Virtual machines operate based on the computer architecture and functions of a real or hypothetical computer, and their implementations may involve specialized hardware, software, or a combination of both.

       Various different kinds of virtual machines exist, each with different functions. System virtual machines (also known as full virtualization VMs) provide a complete substitute for the targeted real machine and a level of functionality required for the execution of a complete operating system. A hypervisor uses native execution to share and manage hardware, allowing multiple different environments, isolated from each other, to be executed on the same physical machine. Modern hypervisors use hardware-assisted virtualization, which provides efficient and full virtualization by using virtualization-specific hardware capabilities, primarily from the host CPUs. Process virtual machines are designed to execute a single computer program by providing an abstracted and platform-independent program execution environment. Some virtual machines, such as QEMU, are designed to also emulate different architectures and allow execution of software applications and operating systems written for another CPU or architecture. Operating-system-level virtualization allows the resources of a computer to be partitioned via the kernel's support for multiple isolated user space instances, which are usually called containers and may look and feel like real machines to the end users.

Friday, April 1, 2016

Unix shell

Unix shell

A Unix shell is a command-line interpreter or shell that provides a traditional Unix-like command line user interface. Users direct the operation of the computer by entering commands as text for a command line interpreter to execute, or by creating text scripts of one or more such commands. Users typically interact with a Unix shell using a terminal emulator; however, direct operation via serial hardware connections or network sessions is common for server systems. All Unix shells provide filename wildcarding, piping, here documents, command substitution, variables and control structures for condition-testing and iteration.

Concept

The most generic sense of the term shell means any program that users employ to type commands. A shell hides the details of the underlying operating system and manages the technical details of the operating system kernel interface, which is the lowest-level, or "inner-most" component of most operating systems.

In Unix-like operating systems, users typically have many choices of command-line interpreters for interactive sessions. When a user logs in to the system interactively, a shell program is automatically executed for the duration of the session. The type of shell, which may be customized for each user, is typically stored in the user's profile, for example in the local passwd file or in a distributed configuration system such as NIS or LDAP; however, the user may execute any other available shell interactively.
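
As a small illustration of the shell being recorded in the local passwd database, a Python sketch using the standard-library pwd module (Unix-only; output will vary by system):

    import pwd

    # Each passwd entry records, among other fields, the account's login shell.
    for entry in pwd.getpwall():
        print(f"{entry.pw_name:<16} {entry.pw_shell}")

    # A typical line of output might read:  root             /bin/bash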

The Unix shell is both an interactive command language as well as a scripting programming language, and is used by the operating system as the facility to control (shell script) the execution of the system. Shells created for other operating systems often provide similar functionality.

On hosts with a windowing system, like OS X, some users may never use the shell directly. On Unix systems, the shell has historically been the implementation language of system startup scripts, including the program that starts a windowing system, configures networking, and many other essential functions. However, some system vendors have replaced the traditional shell-based startup system (init) with different approaches, such as systemd.

Graphical user interfaces for Unix, such as GNOME, KDE, and Xfce are sometimes called visual or graphical shells.

Sunday, March 27, 2016

Hypervisor

       A hypervisor or virtual machine monitor (VMM) is a piece of computer software, firmware or hardware that creates and runs virtual machines.

        A computer on which a hypervisor is running one or more virtual machines is defined as a host machine. Each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. Multiple instances of a variety of operating systems may share the virtualized hardware resources.

Classification

       In their 1974 article "Formal Requirements for Virtualizable Third Generation Architectures" Gerald J. Popek and Robert P. Goldberg classified two types of hypervisor:

Type-1, native or bare-metal hypervisors

       These hypervisors run directly on the host's hardware to control the hardware and to manage guest operating systems. For this reason, they are sometimes called bare-metal hypervisors. The first hypervisors, which IBM developed in the 1960s, were native hypervisors. These included the test software SIMMON and the CP/CMS operating system (the predecessor of IBM's z/VM). Modern equivalents include Oracle VM Server for SPARC, Oracle VM Server for x86, Citrix XenServer, VMware ESX/ESXi and Microsoft Hyper-V 2008/2012.

Type-2 or hosted hypervisors

      These hypervisors run on a conventional operating system just as other computer programs do. A guest operating system runs as a process on the host. Type-2 hypervisors abstract guest operating systems from the host operating system. VMware Workstation, VMware Player, VirtualBox and QEMU are examples of type-2 hypervisors.

     However, the distinction between these two types is not necessarily clear. Linux's Kernel-based Virtual Machine (KVM) and FreeBSD's bhyve are kernel modules that effectively convert the host operating system into a type-1 hypervisor. Nevertheless, since Linux distributions and FreeBSD are still general-purpose operating systems, with other applications competing for VM resources, KVM and bhyve can also be categorized as type-2 hypervisors.
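
As a rough, hedged check of whether a Linux host can act as a KVM hypervisor, one can look for the /dev/kvm device node and the kvm kernel modules. The Python sketch below is a heuristic only and assumes a Linux host with /proc mounted:

# heuristic check for KVM availability on a Linux host
import os

has_dev_kvm = os.path.exists("/dev/kvm")              # device node created by the kvm module

loaded = []
try:
    with open("/proc/modules") as f:
        loaded = [line.split()[0] for line in f]      # first column is the module name
except OSError:
    pass                                              # /proc/modules missing or unreadable

kvm_modules = [m for m in loaded if m.startswith("kvm")]   # e.g. kvm, kvm_intel, kvm_amd
print("KVM device node present:", has_dev_kvm)
print("KVM kernel modules loaded:", ", ".join(kvm_modules) or "none")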

      Microsoft's Hyper-V is often run on top of a full Windows Server installation, which makes it look like a type-2 hypervisor in that scenario; in practice, enabling the Hyper-V role inserts the hypervisor beneath the existing Windows installation, which then runs as a privileged parent partition.

    In 2012, a US software development company called LynuxWorks proposed a type-0 (zero) hypervisor, one with no kernel or operating system whatsoever, which might not be entirely possible.

Thursday, March 24, 2016

Logical partition

A logical partition, commonly called an LPAR, is a subset of a computer's hardware resources, virtualized as a separate computer. In effect, a physical machine can be partitioned into multiple logical partitions, each hosting a separate operating system.

History

IBM developed the concept of hypervisors (virtual machines in CP-40 and CP-67) and in 1972 provided it for the S/370 as Virtual Machine Facility/370. IBM introduced the Start Interpretive Execution (SIE) instruction (designed specifically for the execution of virtual machines) as part of the 370-XA architecture on the 3081, as well as VM/XA versions of VM to exploit it. PR/SM is a type-1 hypervisor based on the CP component of VM/XA that runs directly at the machine level and allocates system resources across LPARs to share physical resources. It is a standard feature on IBM System z, System p, and System i machines.

The terms PR/SM and LPAR are often used interchangeably, including in IBM documentation. Formally, LPAR designates the logical partitioning function and mode of operation, whereas PR/SM is the commercial designation of the feature.

This technology was later developed separately by Amdahl and Hitachi Data Systems for their implementations of the ESA/390 architecture in the mid-1980s, and was continued by IBM for the System z and System i architectures. LPAR and PR/SM reconfigurations can be made without rebooting the computer, i.e., while some LPARs remain active. Reconfigurations can include changing channel path definitions and device definitions.

z/VM supports the z/Architecture HiperSockets function for high-speed TCP/IP communication among virtual machines and logical partitions (LPARs) within the same IBM zSeries server. This function uses an adaptation of the Queued-Direct Input/Output (QDIO) high-speed I/O protocol.

IBM later introduced LPARs to their midrange iSeries and pSeries servers in 1999 and 2001, respectively, albeit with varying technical specifications. Multiple operating systems are compatible with LPARs, including z/OS, z/VM, z/VSE, z/TPF, AIX, Linux, and i/OS. In storage systems, such as the IBM TotalStorage DS8000, LPARs allow for multiple virtual instances of a storage array to exist within a single physical array.

In the first half of 2010, Fujitsu announced the availability of its x86-64 PRIMEQUEST line of servers, which support LPARs.

In the second half of 2011, Hitachi announced the availability of its CB2000 and CB320 blade systems, which support LPARs on x86-64 hardware.

Monday, March 21, 2016

Oracle VM Server for SPARC

          Logical Domains (LDoms or LDOM) is the server virtualization and partitioning technology for SPARC V9 processors. It was first released by Sun Microsystems in April 2007. After Oracle's acquisition of Sun in January 2010, the product was re-branded as Oracle VM Server for SPARC from version 2.0 onwards.

         Each domain is a full virtual machine with a reconfigurable subset of hardware resources. Domains can be securely live migrated between servers while running. Operating systems running inside Logical Domains can be started, stopped, and rebooted independently. A running domain can be dynamically reconfigured to add or remove CPUs, RAM, or I/O devices without requiring a reboot.

Thursday, March 17, 2016

Secure Shell

          Secure Shell, or SSH, is a cryptographic (encrypted) network protocol to allow remote login and other network services to operate securely over an unsecured network.

        SSH provides a secure channel over an unsecured network in a client-server architecture, connecting an SSH client application with an SSH server. Common applications include remote command-line login and remote command execution, but any network service can be secured with SSH. The protocol specification distinguishes between two major versions, referred to as SSH-1 and SSH-2.

         The most visible application of the protocol is for access to shell accounts on Unix-like operating systems, but it sees some limited use on Windows as well. In 2015, Microsoft announced that they would include native support for SSH in a future release.

               SSH was designed as a replacement for Telnet and for unsecured remote shell protocols such as the Berkeley rlogin, rsh, and rexec protocols. Those protocols send information, notably passwords, in plaintext, rendering them susceptible to interception and disclosure using packet analysis. The encryption used by SSH is intended to provide confidentiality and integrity of data over an unsecured network, such as the Internet, although files leaked by Edward Snowden indicate that the National Security Agency can sometimes decrypt SSH, allowing them to read the content of SSH sessions.

Key management

          On Unix-like systems, the list of authorized public keys is typically stored in the home directory of the user that is allowed to log in remotely, in the file ~/.ssh/authorized_keys. This file is respected by SSH only if it is not writable by anything apart from the owner and root. When the public key is present on the remote end and the matching private key is present on the local end, typing in the password is no longer required (some software, such as a Message Passing Interface (MPI) stack, may need this password-less access to run properly). However, for additional security the private key itself can be locked with a passphrase.
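
The permission requirement described above can be verified with a short script. The following Python sketch uses only the standard library; the path is the conventional one, and the exact rules enforced depend on the server's configuration (for example its StrictModes setting). It warns if authorized_keys is writable by group or others:

# warn if ~/.ssh/authorized_keys is group- or world-writable, which typically
# causes sshd to ignore it (exact behavior depends on server configuration)
import os
import stat

path = os.path.expanduser("~/.ssh/authorized_keys")
try:
    mode = os.stat(path).st_mode
except FileNotFoundError:
    print("no authorized_keys file at", path)
else:
    if mode & (stat.S_IWGRP | stat.S_IWOTH):
        print(path, "is writable by group/other; sshd will likely ignore it")
    else:
        print(path, "permissions look acceptable:", oct(stat.S_IMODE(mode)))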

            The private key can also be looked for in standard places, and its full path can be specified as a command-line setting (the -i option for ssh). The ssh-keygen utility produces the public and private keys, always in pairs.
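
In practice a key pair is produced by running ssh-keygen itself; the sketch below simply drives it from Python, assuming OpenSSH is installed. The output path, key type, and comment are illustrative choices, not defaults:

# generate an Ed25519 key pair with ssh-keygen; paths and comment are illustrative
import subprocess

key_path = "/tmp/demo_ed25519"        # private key; the public key goes to demo_ed25519.pub
subprocess.run(
    ["ssh-keygen",
     "-t", "ed25519",                 # key type
     "-f", key_path,                  # output file for the private key
     "-N", "",                        # empty passphrase (use a real passphrase in practice)
     "-C", "demo key"],               # comment stored in the public key
    check=True,
)
# The public key can then be appended to ~/.ssh/authorized_keys on the server,
# and the private key selected on the client with:  ssh -i /tmp/demo_ed25519 user@host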

           SSH also supports password-based authentication that is encrypted by automatically generated keys. In this case the attacker could imitate the legitimate server side, ask for the password, and obtain it (man-in-the-middle attack). However, this is possible only if the two sides have never authenticated before, as SSH remembers the key that the server side previously used. The SSH client raises a warning before accepting the key of a new, previously unknown server. Password authentication can be disabled.

Monday, March 14, 2016

Remote Desktop Protocol

        Remote Desktop Protocol (RDP) is a proprietary protocol developed by Microsoft, which provides a user with a graphical interface to connect to another computer over a network connection. The user employs RDP client software for this purpose, while the other computer must run RDP server software.

        Clients exist for most versions of Microsoft Windows (including Windows Mobile), Linux, Unix, OS X, iOS, Android, and other operating systems. RDP servers are built into Windows operating systems; an RDP server for Unix and OS X also exists. By default, the server listens on TCP port 3389 and UDP port 3389.
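
Whether an RDP endpoint is reachable on the default port can be probed with a plain TCP connect; this only shows that something accepts connections on TCP 3389, not that it actually speaks RDP. A minimal Python sketch, where the host name is a placeholder:

# probe the default RDP port (TCP 3389); "rdp.example.com" is a placeholder host name
import socket

host = "rdp.example.com"
try:
    with socket.create_connection((host, 3389), timeout=3):
        print("TCP 3389 is reachable on", host)
except OSError as exc:
    print("TCP 3389 is not reachable on", host, "-", exc)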

          Microsoft currently refers to its official RDP client software as Remote Desktop Connection, formerly "Terminal Services Client".

          The protocol is an extension of the ITU-T T.128 application sharing protocol.

Friday, March 11, 2016

Paravirtualization

        In computing, paravirtualization is a virtualization technique that presents a software interface to virtual machines that is similar, but not identical, to that of the underlying hardware.

      The intent of the modified interface is to reduce the portion of the guest's execution time spent performing operations that are substantially more difficult to run in a virtual environment than in a non-virtualized one. Paravirtualization provides specially defined 'hooks' that allow the guest(s) and host to request and acknowledge these tasks, which would otherwise be executed in the virtual domain (where execution performance is worse). A successful paravirtualized platform may allow the virtual machine monitor (VMM) to be simpler (by relocating execution of critical tasks from the virtual domain to the host domain), and/or reduce the overall performance degradation of machine execution inside the virtual guest.
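
The idea of explicit hooks can be caricatured in a few lines: rather than issuing privileged hardware operations that the VMM must trap and decode, a paravirtualized guest calls a small, well-defined hypercall interface. The Python sketch below is purely conceptual; the class, method, and operation names are invented for illustration and do not correspond to any real hypervisor API.

# conceptual sketch of a paravirtual "hypercall" interface (all names are invented)
class ToyHypervisor:
    """Host side: a narrow, explicit API instead of trapping privileged instructions."""

    def __init__(self):
        self.pages = {}

    def hypercall(self, op, *args):
        if op == "map_page":                 # guest asks the host to map a memory page
            (guest_addr,) = args
            self.pages[guest_addr] = bytearray(4096)
            return guest_addr
        if op == "console_write":            # guest asks the host to emit console output
            (text,) = args
            print("[guest]", text)
            return len(text)
        raise ValueError(f"unknown hypercall {op!r}")


def paravirtualized_guest(hv):
    """Guest side: calls the hooks directly instead of touching emulated hardware."""
    hv.hypercall("map_page", 0x1000)
    hv.hypercall("console_write", "hello from a paravirtualized guest")


paravirtualized_guest(ToyHypervisor())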

        Paravirtualization requires the guest operating system to be explicitly ported for the para-API; a conventional OS distribution that is not paravirtualization-aware cannot be run on top of a paravirtualizing VMM. However, even in cases where the operating system cannot be modified, components may be available that enable many of the significant performance advantages of paravirtualization. For example, the Xen Windows GPLPV project provides a kit of paravirtualization-aware device drivers, licensed under the terms of the GPL, that are intended to be installed into a Microsoft Windows virtual guest running on the Xen hypervisor.

Linux paravirtualization support

         At the USENIX conference in 2006 in Boston, Massachusetts, a number of Linux development vendors (including IBM, VMware, Xen, and Red Hat) collaborated on an alternative form of paravirtualization, initially developed by the Xen group, called "paravirt-ops". The paravirt-ops code (often shortened to pv-ops) was included in the mainline Linux kernel as of version 2.6.23 and provides a hypervisor-agnostic interface between the hypervisor and guest kernels. Distribution support for pv-ops guest kernels appeared starting with Ubuntu 7.04 and RedHat 9. Xen hypervisors based on any 2.6.24 or later kernel support pv-ops guests, as does VMware's Workstation product beginning with version 6.

Monday, March 7, 2016

Operating-system-level virtualization

Operating-system-level virtualization is a server-virtualization method in which the kernel of an operating system allows for multiple isolated user-space instances, instead of just one. Such instances (sometimes called containers, software containers, virtualization engines (VEs), virtual private servers (VPS), or jails) may look and feel like a real server from the point of view of their owners and users.

On Unix-like operating systems, one can see this technology as an advanced implementation of the standard chroot mechanism. In addition to isolation mechanisms, the kernel often provides resource-management features to limit the impact of one container's activities on other containers.
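
The chroot mechanism is exposed directly to programs; for example, Python's standard os module can confine the current process to a directory subtree. The sketch below must run as root, and /srv/jail is an illustrative path that would already have to contain whatever the confined process needs:

# confine the current process to a directory subtree via chroot
# (requires root; /srv/jail is an illustrative, pre-populated directory)
import os

jail = "/srv/jail"
os.chroot(jail)      # make the jail directory the new filesystem root for this process
os.chdir("/")        # step inside the new root so relative lookups cannot escape it
print("now running with", jail, "as '/'; visible entries:", os.listdir("/"))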

Uses

Operating-system-level virtualization is commonly used in virtual hosting environments, where it is useful for securely allocating finite hardware resources amongst a large number of mutually-distrusting users. System administrators may also use it, to a lesser extent, for consolidating server hardware by moving services on separate hosts into containers on the one server.

Other typical scenarios include separating several applications to separate containers for improved security, hardware independence, and added resource management features. The improved security provided by the use of a chroot mechanism, however, is nowhere near ironclad. Operating-system-level virtualization implementations capable of live migration can also be used for dynamic load balancing of containers between nodes in a cluster.

Flexibility

Operating-system-level virtualization is not as flexible as other virtualization approaches since it cannot host a guest operating system different from the host one, or a different guest kernel. For example, with Linux, different distributions are fine, but other operating systems such as Windows cannot be hosted.

Solaris partially overcomes the above described limitation with its branded zones feature, which provides the ability to run an environment within a container that emulates an older Solaris 8 or 9 version in a Solaris 10 host. Linux branded zones (referred to as "lx" branded zones) are also available on x86-based Solaris systems, providing a complete Linux userspace and support for the execution of Linux applications; additionally, Solaris provides utilities needed to install Red Hat Enterprise Linux 3.x or CentOS 3.x Linux distributions inside "lx" zones. However, in 2010 Linux branded zones were removed from Solaris; in 2014 they were reintroduced in Illumos, which is the open source Solaris fork, supporting 32-bit Linux kernels.

Friday, March 4, 2016

Linux-VServer

      Linux-VServer is a virtual private server implementation that was created by adding operating system-level virtualization capabilities to the Linux kernel. It is developed and distributed as open-source software.

        The project was started by Jacques Gélinas. It is now maintained by Herbert Pötzl of Austria and is not related to the Linux Virtual Server project, which implements network load balancing.

       Linux-VServer is a jail mechanism in that it can be used to securely partition resources on a computer system (such as the file system, CPU time, network addresses and memory) in such a way that processes cannot mount a denial-of-service attack on anything outside their partition.

        Each partition is called a security context, and the virtualized system within it is the virtual private server. A chroot-like utility for descending into security contexts is provided. Booting a virtual private server is then simply a matter of kickstarting init in a new security context; likewise, shutting it down simply entails killing all processes with that security context. The contexts themselves are robust enough to boot many Linux distributions unmodified, including Debian and Fedora.

          Virtual private servers are commonly used in web hosting services, where they are useful for segregating customer accounts, pooling resources and containing any potential security breaches. To save space on such installations, each virtual server's file system can be created as a tree of copy-on-write hard links to a "template" file system. The hard link is marked with a special filesystem attribute and when modified, is securely and transparently replaced with a real copy of the file.
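
The template-plus-hard-links layout itself can be sketched with ordinary system calls: walk the template tree, recreate its directories, and hard-link every file into the new guest root. The copy-on-write break on modification is performed by the Linux-VServer kernel patch, not by this user-space sketch, and the paths below are illustrative:

# build a guest file-system tree as hard links into a template tree (illustrative paths;
# the copy-on-write "break link on write" behavior is provided by the kernel patch)
import os

template_root = "/srv/vservers/template"
guest_root = "/srv/vservers/guest1"

for dirpath, dirnames, filenames in os.walk(template_root):
    rel = os.path.relpath(dirpath, template_root)
    target_dir = os.path.join(guest_root, rel)
    os.makedirs(target_dir, exist_ok=True)       # recreate the directory structure
    for name in filenames:
        src = os.path.join(dirpath, name)
        dst = os.path.join(target_dir, name)
        if not os.path.exists(dst):
            os.link(src, dst)                    # a hard link shares the on-disk data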

            Linux-VServer provides two branches, stable (2.2.x), and devel (2.3.x) for 2.6-series kernels and a single stable branch for 2.4-series. A separate stable branch integrating the grsecurity patch set is also available.

Advantages
  • Virtual servers share the same system call interface and do not have any emulation overhead.
  • Virtual servers do not have to be backed by opaque disk images, but can share a common file system and common sets of files (through copy-on-write hard links). This makes it easier to back up a system and to pool disk space amongst virtual servers.
  • Processes within the virtual server run as regular processes on the host system. This is somewhat more memory-efficient and I/O-efficient than whole-system emulation, which cannot return "unused" memory or share a disk cache with the host and other virtual servers.
  • Processes within the virtual server are queued on the same scheduler as on the host, allowing guest processes to run concurrently on SMP systems. This is not trivial to implement with whole-system emulation.
  • Networking is based on isolation rather than virtualization, so there is no additional overhead for packets.
  • Smaller attack surface for security bugs: only one kernel with a small additional code base, compared to two or more kernels and the large interfaces between them.
  • Rich Linux scheduling features such as real-time priorities.


Tuesday, March 1, 2016

Hybrid server

          A hybrid hosting service or hybrid server is a type of Internet hosting which is a combination of a physically-hosted server with virtualization technology.

          A hybrid server is a new kind of dedicated server that offers both the power of a classic dedicated server and the flexibility of cloud computing. On a hybrid dedicated server, the hardware is 100% allocated to the user. The price is lower than that of a dedicated server.

         The server is separated into hybrid server environments using Red Hat KVM or another virtualization technology. Each hybrid environment is securely isolated and has guaranteed resources available to it, which ensures a high level of performance and responsiveness. A hybrid server combines all of the benefits of virtualization technology with the performance of a full dedicated server, so an administrator can use automation to suspend, restart, or reinstall the operating system. One large server is split into a few (normally two) hybrid servers.

Saturday, February 27, 2016

Emulator

             In computing, an emulator is hardware or software that enables one computer system (called the host) to behave like another computer system (called the guest). An emulator typically enables the host system to run software or use peripheral devices designed for the guest system.

Emulators in computing

         Emulation refers to the ability of a computer program in an electronic device to emulate (imitate) another program or device. Many printers, for example, are designed to emulate Hewlett-Packard LaserJet printers because so much software is written for HP printers. If a non-HP printer emulates an HP printer, any software written for a real HP printer will also run in the non-HP printer emulation and produce equivalent printing.

           A hardware emulator is an emulator that takes the form of a hardware device. Examples include the DOS-compatible cards installed in some old-world Macintoshes, such as the Centris 610 or Performa 630, which allowed them to run PC programs, as well as FPGA-based hardware emulators.

           In a theoretical sense, the Church-Turing thesis implies that (under the assumption that enough memory is available) any operating environment can be emulated within any other. However, in practice, it can be quite difficult, particularly when the exact behavior of the system to be emulated is not documented and has to be deduced through reverse engineering. It also says nothing about timing constraints; if the emulator does not perform as quickly as the original hardware, the emulated software may run much more slowly than it would have on the original hardware, possibly triggering time interrupts that alter behavior.
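
In miniature, an emulator is just a loop that fetches, decodes, and executes the guest machine's instructions in software, which is also why it carries a speed penalty: each guest instruction costs several host operations. The Python sketch below emulates an invented four-instruction toy machine; the instruction set exists only for illustration.

# toy emulator for an invented machine with LOAD, ADD, PRINT, and HALT instructions
def run(program):
    acc = 0                                    # single accumulator register
    pc = 0                                     # program counter
    while pc < len(program):
        opcode, operand = program[pc]          # fetch
        pc += 1
        if opcode == "LOAD":                   # decode and execute
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "PRINT":
            print("acc =", acc)
        elif opcode == "HALT":
            break
        else:
            raise ValueError(f"illegal opcode {opcode!r}")

run([("LOAD", 2), ("ADD", 40), ("PRINT", None), ("HALT", None)])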

Emulation in preservation

            Emulation is a strategy in digital preservation to combat obsolescence. Emulation focuses on recreating an original computer environment, which can be time-consuming and difficult to achieve, but valuable because of its ability to maintain a closer connection to the authenticity of the digital object.

            Emulation addresses the original hardware and software environment of the digital object and recreates it on a current machine. The emulator allows the user to have access to any kind of application or operating system on a current platform, while the software runs as it did in its original environment. Jeffrey Rothenberg, an early proponent of emulation as a digital preservation strategy, states, "the ideal approach would provide a single extensible, long-term solution that can be designed once and for all and applied uniformly, automatically, and in synchrony (for example, at every refresh cycle) to all types of documents and media". He further states that this should not only apply to out-of-date systems, but also be upwardly mobile to future unknown systems. Practically speaking, when a certain application is released in a new version, rather than address compatibility issues and migration for every digital object created in the previous version of that application, one could create an emulator for the application, allowing access to all of said digital objects.