Friday, May 29, 2015

Dedicated hosting service, hosting solution, management, colocation, colocation facilities, HVAC systems, dedicated servers, hybrid server, complex managed hosting

A dedicated hosting service, dedicated server, or managed hosting service is a type of Internet hosting in which the client leases an entire server not shared with anyone else. This is more flexible than shared hosting, as organizations have full control over the server(s), including choice of operating system, hardware, etc. There is also another level of dedicated or managed hosting commonly referred to as complex managed hosting. Complex managed hosting applies to physical dedicated servers, hybrid servers, and virtual servers, with many companies choosing a hybrid (combination of physical and virtual) hosting solution. There are many similarities between standard and complex managed hosting, but the key difference is the level of administrative and engineering support that the customer pays for, owing to both the increased size and complexity of the infrastructure deployment. The provider steps in to take over most of the management, including security, memory, storage and IT support. The service is primarily proactive in nature. Server administration can usually be provided by the hosting company as an add-on service. In some cases a dedicated server can offer less overhead and a larger return on investment. Dedicated servers are most often housed in data centers, similar to colocation facilities, providing redundant power sources and HVAC systems. In contrast to colocation, the server hardware is owned by the provider, and in some cases the provider will offer support for operating systems or applications.

Using a dedicated hosting service offers the benefits of high performance, security, email stability, and control. Due to the relatively high price of dedicated hosting, it is mostly used by websites that receive a large volume of traffic.

Tuesday, May 26, 2015

CGI proxy, web proxy, James Marshall, demonstrate

A CGI web proxy accepts target URLs using a Web form in the user's browser window, processes the request, and returns the results to the user's browser. Consequently it can be used on a device or network that does not allow "true" proxy settings to be changed. The first recorded CGI proxy was developed by American computer scientist James Marshall.

Some CGI proxies were set up for purposes such as making websites more accessible to disabled people, but have since been shut down due to excessive traffic, usually caused by a third party advertising the service as a means to bypass local filtering. Since many of these users don't care about the collateral damage they are causing, it became necessary for organizations to hide their proxies, disclosing the URLs only to those who take the trouble to contact the organization and demonstrate a genuine need.

Saturday, May 23, 2015

Cloud clients, Chromebook, desktop computers, laptops, tablets and smartphones

Users access cloud computing using networked client devices, such as desktop computers, laptops, tablets and smartphones. Some of these devices, the cloud clients, rely on cloud computing for all or a majority of their applications, so as to be essentially useless without it. Examples are thin clients and the browser-based Chromebook. Many cloud applications do not require specific software on the client and instead use a web browser to interact with the cloud application. With Ajax and HTML5, these Web user interfaces can achieve a look and feel similar to, or even better than, that of native applications. Some cloud applications, however, support specific client software dedicated to these applications (e.g., virtual desktop clients and most email clients). Some legacy applications (line-of-business applications that until now have been prevalent in thin client computing) are delivered via a screen-sharing technology.

Tuesday, May 19, 2015

Bandwidth pooling, dedicated hosting, availability, multi-provider, multi-homed, prominent players, gigabyte usage model

Bandwidth pooling is a key mechanism for hosting buyers to determine which provider offers the right bandwidth pricing model. Most dedicated hosting providers bundle bandwidth pricing along with the monthly charge for the dedicated server. An example illustrates this. An average $100 server from any of the common dedicated bandwidth providers would carry 2 TB of bandwidth. If you purchased 10 servers, you would have the ability to consume 2 TB of bandwidth per server. Suppose, however, that given your application architecture only 2 of these 10 servers are really web-facing, while the rest are used for storage, search, database or other internal functions. A provider that allows bandwidth pooling would let you consume 20 TB of bandwidth overall, as incoming or outbound traffic or both, depending on its policy. A provider that does not offer bandwidth pooling would let you use only 4 TB of bandwidth, and the remaining 16 TB would be practically unusable. This fact is well known among hosting providers, and it allows them to cut costs by offering an amount of bandwidth that frequently will not be used. This is known as overselling, and it allows high-bandwidth customers to use more than what a host might otherwise offer, because the host knows that this will be balanced out by customers who use less than the maximum allowed.
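The arithmetic in the example above can be sketched in a few lines. The function name and the `pooled` flag are invented for illustration; the numbers mirror the 10-server scenario:

```python
# Hypothetical comparison of bandwidth pooling vs. per-server caps.

def usable_bandwidth_tb(servers, per_server_tb, web_facing, pooled):
    """Total usable bandwidth in TB under each policy."""
    if pooled:
        # Pooling: every server's allotment joins one shared pool.
        return servers * per_server_tb
    # No pooling: only the web-facing servers' allotments can be used.
    return web_facing * per_server_tb

# 10 servers, 2 TB each, only 2 of them web-facing.
pooled_tb = usable_bandwidth_tb(10, 2, 2, pooled=True)     # 20 TB usable
unpooled_tb = usable_bandwidth_tb(10, 2, 2, pooled=False)  # 4 TB usable
```

With pooling the full 20 TB is available to the web-facing servers; without it, 16 TB of the purchased bandwidth goes to waste.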

One of the reasons for choosing to outsource dedicated servers is the availability of high powered networks from multiple providers. As dedicated server providers utilize massive amounts of bandwidth, they are able to secure lower volume-based pricing that includes a multi-provider blend of bandwidth. To achieve the same type of network without a multi-provider blend of bandwidth, a large investment in core routers, long term contracts, and expensive monthly bills would need to be in place. The expenses needed to develop a network without a multi-provider blend of bandwidth do not make economic sense for hosting providers.

Many dedicated server providers include a service level agreement based on network up-time. Some dedicated server hosting providers offer a 100% up-time guarantee on their network. By securing multiple vendors for connectivity and using redundant hardware, providers are able to guarantee higher up-times, usually between 99% and 100%, if they are a higher quality provider. One aspect of higher quality providers is that they are most likely to be multi-homed across multiple quality up-link providers, which in turn provides significant redundancy in the event one goes down, in addition to potentially improved routes to destinations.

Bandwidth consumption over the last several years has shifted from a per-megabit usage model to a per-gigabyte usage model. Bandwidth was traditionally measured in line speed access that included the ability to purchase needed megabits at a given monthly cost. As the shared hosting model developed, the trend toward billing by the gigabyte, or total bytes transferred, replaced the megabit line-speed model, so dedicated server providers started offering per-gigabyte pricing as well.

Prominent players in the dedicated server market offer large amounts of bandwidth ranging from 500 gigabytes to 3000 gigabytes using the "overselling" model. It is not uncommon for major players to provide dedicated servers with 1 terabyte (TB) of bandwidth or higher. Usage models based on byte-level measurement usually include a given amount of bandwidth with each server and a price per gigabyte after a certain threshold has been reached. Expect to pay additional fees for bandwidth overage usage. For example, if a dedicated server has been given 3000 gigabytes of bandwidth per month and the customer uses 5000 gigabytes of bandwidth within the billing period, the additional 2000 gigabytes of bandwidth will be invoiced as bandwidth overage. Each provider has a different model for billing. No industry standards have been set yet.
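The overage calculation described above is simple to express in code. The function names and the per-gigabyte rate here are hypothetical, since, as noted, every provider bills differently:

```python
# Hypothetical per-gigabyte overage billing sketch.

def overage_gb(included_gb, used_gb):
    """Gigabytes billed beyond the included allotment (never negative)."""
    return max(0, used_gb - included_gb)

def overage_charge(included_gb, used_gb, rate_per_gb):
    """Overage fee for the billing period at a given per-GB rate."""
    return overage_gb(included_gb, used_gb) * rate_per_gb

# The example above: 3000 GB included, 5000 GB used in the period.
extra = overage_gb(3000, 5000)              # 2000 GB invoiced as overage
charge = overage_charge(3000, 5000, 0.10)   # at a hypothetical $0.10/GB
```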

Saturday, May 16, 2015

Bandwidth and connectivity, percentile, measurement, unmetered service, bandwidth overages, median measurement, total transfer method, unmetered method

Bandwidth refers to the data transfer rate, or the amount of data that can be carried from one point to another in a given time period (usually a second), and is often expressed in bits (of data) per second (bit/s). Visitors to your server, web site, or applications consume bandwidth, and providers measure and bill that usage in one of three ways: the 95th percentile method, the unmetered method, and the total transfer method (measured in bytes transferred).

95th percentile method

Line speed, billed on the 95th percentile, refers to the speed at which data flows from the server or device, sampled every 5 minutes over the month; the top 5% of measurements (the highest ones) are dropped, and usage for the month is based on the next-highest measurement. This is similar to a median measurement, which can be thought of as a 50th percentile measurement (with 50% of measurements above, and 50% of measurements below), whereas this sets the cutoff at the 95th percentile, with 5% of measurements above the value and 95% below it. This is also known as burstable billing. Line speed is measured in bits per second (or kilobits per second, megabits per second or gigabits per second).
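A minimal sketch of the calculation, assuming one speed sample (in Mbit/s) every 5 minutes, i.e. 8,640 samples in a 30-day month; the sample data and function name are invented:

```python
# 95th percentile (burstable) billing: discard the top 5% of samples,
# bill at the highest remaining one.

def billable_rate(samples):
    """Return the 95th percentile rate from a month of speed samples."""
    ordered = sorted(samples, reverse=True)  # highest first
    drop = int(len(ordered) * 0.05)          # top 5% of samples ignored
    return ordered[drop]

# A 30-day month: steady 50 Mbit/s, but bursts to 400 Mbit/s for
# exactly 5% of the 8,640 samples (36 hours total).
month = [50] * 8208 + [400] * 432
billed = billable_rate(month)  # 50 Mbit/s: the bursts fall in the top 5%
```

This is why the method is called burstable: short, intense bursts (under 36 hours a month in total) never raise the bill.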

Unmetered method


The second bandwidth measurement is unmetered service, where providers cap or control the "top line" speed for a server. Top line speed in unmetered bandwidth is the total Mbit/s allocated to the server and configured on the switch level. For example, if you purchase 10 Mbit/s unmetered bandwidth, the top line speed would be 10 Mbit/s. The provider controls the speed at which transfers take place, while the dedicated server owner is never charged for bandwidth overages. Unmetered bandwidth services usually incur an additional charge.

Total transfer method


Some providers will calculate the Total Transfer, which is the measurement of actual data leaving and arriving, measured in bytes. Although it is typically the sum of all traffic into and out of the server, some providers measure only outbound traffic (traffic from the server to the internet).
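The two counting policies described above amount to a one-line difference. The function name and the byte counts below are illustrative only:

```python
# Total transfer measurement: bytes in both directions, or outbound
# only, depending on the provider's policy.

def total_transfer(in_bytes, out_bytes, outbound_only=False):
    """Billable bytes for the period under either counting policy."""
    return out_bytes if outbound_only else in_bytes + out_bytes

# 1 GB received, 3 GB sent during the billing period.
both_ways = total_transfer(10**9, 3 * 10**9)                       # 4 GB
out_only = total_transfer(10**9, 3 * 10**9, outbound_only=True)    # 3 GB
```

For a typical web server that sends far more than it receives, the two policies produce similar bills; for a backup or ingest server, the difference can be large.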

Wednesday, May 13, 2015

Application server,application functions,specific implementation,Web applications,Application Server Frameworks,clustering,fail-over,load-balancing

An application server can be either a software framework that provides a generalized approach to creating an application-server implementation, regardless of what the application functions are, or the server portion of a specific implementation instance. In either case, the server's function is dedicated to the efficient execution of procedures (programs, routines, scripts) that support its hosted applications.

Most application server frameworks contain a comprehensive service layer model. An application server acts as a set of components accessible to the software developer through an API defined by the platform itself. For Web applications, these components usually run in the same environment as the web server(s), and their main job is to support the construction of dynamic pages. However, many application servers target much more than just Web page generation: they implement services such as clustering, fail-over, and load-balancing, so developers can focus on implementing the business logic.
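The "dynamic page" role can be sketched with Python's standard WSGI interface, which defines the same contract, the server supplies the request environment and the application builds the page. The greeting logic here is a made-up stand-in for real business logic:

```python
# Minimal WSGI application: the hosting server handles the socket,
# threading, etc.; this callable only constructs the dynamic page.

def app(environ, start_response):
    """WSGI callable: build an HTML page from the request's query string."""
    name = environ.get("QUERY_STRING") or "world"
    body = f"<h1>Hello, {name}</h1>".encode()
    start_response("200 OK", [("Content-Type", "text/html")])
    return [body]
```

Any WSGI-compliant server (e.g. `wsgiref.simple_server` from the standard library) can host this callable, which is the separation the paragraph above describes: the platform owns execution, the developer owns page construction.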

In the case of Java application servers, the server behaves like an extended virtual machine for running applications, transparently handling connections to the database on one side, and, often, connections to the Web client on the other.

Other uses of the term may refer to the services that a server makes available or the computer hardware on which the services run.

Saturday, May 9, 2015

Anonymous HTTPS proxy

Users who want to bypass web filtering and prevent anyone from monitoring what they are doing will typically search the internet for an open and anonymous HTTPS transparent proxy. They will then configure their browser to proxy all requests through the web filter to this anonymous proxy. Those requests are encrypted with HTTPS, so the web filter cannot distinguish them from, say, legitimate access to a financial website. Thus, content filters are only effective against unsophisticated users.

Use of HTTPS proxies is detectable even without examining the encrypted data, based simply on firewall monitoring of addresses for frequency of use and bandwidth usage. If a massive amount of data is being directed through an address that is within an ISP address range such as Comcast's, it is likely a home-operated proxy server. Either the single address or the entire ISP address range is then blocked at the firewall to prevent further connections.
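The firewall-side heuristic described above can be sketched as a volume threshold over per-address byte counters. The threshold, addresses, and traffic figures below are invented for illustration:

```python
# Hypothetical detection heuristic: flag destination addresses that
# carry an outsized volume of encrypted traffic.

def suspected_proxies(bytes_by_addr, min_bytes=10**10):
    """Return addresses whose transfer volume meets the threshold."""
    return sorted(addr for addr, b in bytes_by_addr.items()
                  if b >= min_bytes)

# Byte counters a firewall might accumulate per destination address.
traffic = {
    "203.0.113.7": 4 * 10**10,   # sustained, heavy HTTPS volume
    "198.51.100.2": 2 * 10**8,   # ordinary browsing
}
flagged = suspected_proxies(traffic)  # ["203.0.113.7"]
```

A real deployment would also weight frequency of use and whether the address falls inside a residential ISP range, as the paragraph notes, before blocking.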