Web applications are a dominant part of most enterprise IT portfolios, and Platform-as-a-Service (PaaS) products offer a compelling way to deploy and manage these applications. However, PaaS has proven tricky for vendors to explain, and therefore difficult for customers to understand. In this post, we’ll discuss why you should consider using PaaS products, what Tier 3 has to offer, and how you can deploy a web application to a PaaS in a matter of minutes.
Benefits of PaaS
What exactly is PaaS? Basically, it’s a way of delivering an application platform as a service. Developers don’t interface directly with infrastructure (e.g. servers, networks, load balancers) but rather focus on building and deploying applications through a set of exposed services in a managed fabric. PaaS simplifies the deployment and management of modern web applications while making those applications more resilient and functional. How can PaaS add value to your organization? Let’s drill into some specifics:
- Reduce server sprawl with a centralized host for web applications. How many web servers are sitting relatively idle in your data center because they are only running a handful of applications? Server sprawl can be a major issue as each IT project requisitions its own hardware for application development/staging/QA/production. What about all your websites for customers and marketing campaigns? It’s possible that you’re using many different servers (and even providers!) to host all of those individual websites. PaaS can offer a centralized fabric that can be sized and optimized for hundreds of internal or external web applications.
- Save money by adding resources only when you need them. Many PaaS products have a concept of automatic scale or user-driven resizing to account for spikes or dips in utilization. Before cloud computing, organizations typically sized their infrastructure for peaks and accepted that their environment would be underutilized the majority of the time. Now, it’s possible to deploy a web application with a 128MB memory allocation, and instantly double it when needed. Need to spread the workload across multiple machines? Simply issue a command to add the application to another node in the PaaS fabric. No calls to the operations team, no formal “deployment” exercises. PaaS makes it possible to size and scale applications on demand, which makes it easier for you to manage the overall environment.
- Focus on your application, and don’t sweat the infrastructure. One of the most important benefits of PaaS is that it abstracts the infrastructure away from the application, and the developer. Developers deploy to a fabric, not a server. There’s no need for the IT project team to provision web or database servers. Simply push applications to the existing PaaS environment. The infrastructure itself is managed closely by an operations team and automation is included at all levels to deliver automatic patching, scaling, monitoring and more.
- Multi-tenancy and high-availability baked in. PaaS products are designed to deliver high-availability to multiple applications (or “tenants”) and are therefore scaled out to provide significant compute capacity. As such, you’ll find many PaaS products with built-in load balancing services, failover when servers fail, concurrency management, and more. All of these features boost reliability and performance for each application hosted in the PaaS. Even applications not specifically designed for PaaS can conceivably be deployed to a PaaS with little to no code refactoring.
- Avoid unnecessary duplication by using consolidated application services. When most people think of PaaS they think of hosting web applications, but some of the best capabilities are those offered by complementary services. Most PaaS products offer add-on services like databases, storage, identity management, messaging, caching and more. You’ll also find some PaaS products that offer business services such as service catalogs, and API management and monitoring. Developers can use these services when building their web applications and not have to provision or locate hardware to host those services at runtime. These services simply exist inside the PaaS and are available to all applications deployed there.
- Deliver “IT as a Service” through measured usage for easy chargebacks. A core tenet of cloud computing is “pay as you go” and measured usage. A true PaaS is built upon a “cloudy” foundation that tracks utilization and delivers an all-up cost to the user at the end of the month (or whenever the user checks their charges). Because of this cost transparency, it’s easy for organizations to deliver “IT as a service” by offering a PaaS for internal/external websites and passing along the usage-based invoices to each department.
All of this helps developers deploy faster while giving system administrators a more streamlined set of operational responsibilities.
Why Tier 3 Web Fabric?
Tier 3 has its own PaaS product – called Web Fabric – that is based on Pivotal’s Cloud Foundry project. We’ve added the open-source Iron Foundry extensions so that we can offer some of the best language and framework support in the industry. Unlike the shared PaaS services offered by others, Web Fabric is provisioned uniquely for each customer. This gives you the isolation you need, while still offering a robust platform for all the custom applications used by your organization. The default Web Fabric environment consists of five total servers and can support dozens of web applications.
Why might you choose to use the Tier 3 Web Fabric to host your modern web applications? We like to point out at least five reasons:
- Support for the programming languages you already use. Most IT shops are heterogeneous and use technologies from multiple vendors. You may have written a number of enterprise-class web applications in .NET or Java, but also have departments that make use of Ruby or PHP. If you’re doing more mobile development, you might have started looking at Node.js for high performing web applications. Tier 3’s Web Fabric supports all those programming languages and more. Instead of using multiple PaaS products or infrastructure clouds to host your diverse application portfolio, use a single fabric for all of them!
- Application services to cover your scenarios. Need a relational database? We offer MySQL, PostgreSQL, and Microsoft SQL Server. Looking for a NoSQL repository? Web Fabric has Redis and MongoDB. RabbitMQ is also available when you want to add a durable message queue to your solution. In addition, each Web Fabric comes with New Relic monitoring for web applications. This excellent application performance management tool gives you deep insight that helps identify bottlenecks and monitor application health.
- Cloud Foundry ecosystem. There’s no doubting the impact of Cloud Foundry on the PaaS industry. This open source project was launched in 2011 and has been adopted by multiple PaaS vendors. Not only does this make it straightforward to move applications between Cloud Foundry-compliant clouds, but also means that there are multiple parties creating tools that work for any Cloud Foundry environment. From the Windows-based Cloud Foundry Explorer, to the OSX-friendly Project Thor, to web-based development environments, there’s a growing ecosystem of vendors and tools to help you be successful with Cloud Foundry.
- Enterprise-class infrastructure. Tier 3’s network of highly resilient, globally distributed infrastructure is optimized for performance throughout the stack. And since Web Fabric runs on the Tier 3 enterprise cloud, your applications will be powered by high performing storage, multiple VPN options, security services, and much more.
- IaaS and PaaS, better together. Not all workloads fit into a PaaS platform, and not all applications require dedicated infrastructure. By offering our customers enterprise-class infrastructure in addition to Web Fabric, we’ve provided two useful hosting mechanisms in the same cloud. Keep your PaaS applications geographically close to your IaaS applications and data, and share the same management tools, security profile, and networking configuration.
Deploying to Web Fabric from a Cloud-based Development Environment
Developers can push their application to Web Fabric in a number of ways. While most developers are familiar with command line interfaces and GUI tools that run on their desktop, a new crop of cloud-based integrated development environments (IDEs) can make PaaS deployments even simpler. Cloud IDEs offer excellent collaboration capabilities, easy accessibility, and “no-touch” setup.
One such cloud IDE is Codenvy. This tool works natively with Cloud Foundry, making it easy to build Java/Ruby/Python/PHP applications and then push them to Web Fabric. After signing up for a free account, the developer is presented with the option to link to GitHub or any Git repository.
Codenvy uses a handy “new project” wizard experience to help the developer choose which programming language to use, and then which (supported) PaaS to push to. In the short animation below, observe how I create a new Java Spring project, choose Cloud Foundry (Web Fabric) as the destination, finish the wizard, and publish the application to Web Fabric.
The Codenvy IDE includes many developer productivity features including type-ahead coding (i.e. “intellisense”), code generation, formatting tools, and much more. Changing the application code and re-publishing the application to Web Fabric is simple. Notice how easy it is to resize my application (e.g. memory, instance count) at any time!
Besides simply deploying applications, Codenvy supports simple management of existing applications. From the PaaS → Cloud Foundry → Applications menu, I can see all the applications that I’ve deployed to Web Fabric and stop/start/restart/delete any of them.
Developers using cloud-based IDEs don’t get all the features of desktop IDEs (like access to local resources, plug-ins), but they are an increasingly viable choice for developers who are trying new technologies or need access to their IDE from any computer.
With our enterprise-class infrastructure and platform cloud, Tier 3 is uniquely positioned to address your cloud needs. Web Fabric is an ideal host for your modern web applications and its Cloud Foundry heritage makes it compatible with a wide array of tools including cloud-based IDEs like Codenvy.
Interested in taking a look at Web Fabric? Contact us for a demonstration and free trial!
It’s easy for cloud customers to get confused about the roles and responsibilities of their internal team and their cloud vendor. That confusion is especially evident when it comes to application availability and business continuity planning. How does disaster recovery differ from high availability? Does my cloud provider automatically load balance my application servers? The answers to these questions are critical, but sometimes overlooked until a crisis occurs. In this post, we’ll talk about load balancing, high availability, and disaster recovery in the cloud, and what Tier 3’s cloud infrastructure has to offer.
What is it?
Wikipedia describes load balancing as:
Load balancing is a computer networking method to distribute workload across multiple computers or a computer cluster, network links, central processing units, disk drives, or other resources, to achieve optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy.
You commonly see this technique employed in web applications where multiple web servers work together to handle inbound traffic. There are at least two reasons why load balancing is employed:
- The required capacity is too large for a single machine. When running processes that consume a large amount of system resources (e.g. CPU and memory), it often makes sense to employ multiple servers to distribute the work instead of constantly adding capacity to a single server. In plenty of cases, it’s not even possible to allocate enough memory or CPU to a single machine to handle all of the work! Load balancing across multiple servers makes it possible to host high traffic websites or run complex data processing jobs that demand more resources than a single server can deliver.
- Looking for more reliability and flexibility in a solution deployment. Even if you *could* run an entire server application on a single server, it may not be a good idea. Load balancing can increase reliability by providing many servers able to do the same job. If one server becomes unavailable, the others can simply pick up the additional work until a new server comes online. Software updates become easier since a server can simply be taken out of the load balancing pool when a patch or reboot is necessary. Load balancing gives system administrators more flexibility in maintaining servers without negatively impacting the application as a whole.
Load balancing can be accomplished using either a “push” or a “pull” model. For web applications or database clusters that sit behind a load balancer, inbound requests are pushed to the pool of servers based on an algorithm such as round-robin. In this scenario, servers await traffic sent to them by the load balancer. It’s also possible to use a “pull” model where work requests are added to a centralized “queue” and a collection of servers retrieve those requests from that queue when they are available. For instance, consider big data processing scenarios where many servers work to analyze data and return results. Each server takes a chunk of work and the overall processing load is distributed across many machines.
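The two models described above can be sketched in a few lines of Python. This is an illustration of the routing logic only (all class and variable names are ours, not from any particular product): a round-robin "push" balancer assigns each inbound request to the next server in the pool, while "pull" workers take jobs from a central queue whenever they have spare capacity.

```python
from collections import deque
from queue import Queue

# "Push" model: a round-robin balancer assigns each inbound
# request to the next server in the pool.
class RoundRobinBalancer:
    def __init__(self, servers):
        self.pool = deque(servers)

    def route(self, request):
        server = self.pool[0]
        self.pool.rotate(-1)  # next request goes to the next server
        return (server, request)

balancer = RoundRobinBalancer(["web1", "web2", "web3"])
routed = [balancer.route(r) for r in ["req-a", "req-b", "req-c", "req-d"]]
# The fourth request wraps around to web1 again.

# "Pull" model: workers drain jobs from a central queue whenever
# they are available; nothing is pushed at them.
jobs = Queue()
for j in range(4):
    jobs.put(f"job-{j}")

def pull_work(queue, worker_name, results):
    while not queue.empty():
        results.append((worker_name, queue.get()))

results = []
pull_work(jobs, "worker-1", results)
```

In the push model the balancer decides where work goes; in the pull model each worker decides when to take work, which naturally spreads load across machines of different speeds.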
How can Tier 3 help?
Tier 3 offers multiple load balancing options to our customers. All customers have access to a free, shared load balancer. This load balancer service – based on the powerful Citrix NetScaler product – provides a range of capabilities including SSL offloading for higher performance, session persistence (known as “sticky sessions”), and routing of TCP, HTTP, and HTTPS traffic for up to three servers. To use this service today, send a request to email@example.com. We plan to launch a self-service version of this capability in the very near future.
If you’re looking for more control over the load balancing configuration or have higher bandwidth needs, you can deploy a dedicated load balancer (virtual appliance) into the Tier 3 cloud. This “bring your own load balancer” option leverages internal expertise you may have with a particular vendor. It also gives you complete control over the load balancer setup so that you can modify the routing algorithm or enable/disable features that matter to your business.
What is it?
Returning to Wikipedia, high availability is defined as:
High availability is a system design approach and associated service implementation that ensures a prearranged level of operational performance will be met during a contractual measurement period.
High availability is described through service level agreements and achieved through an architecture that focuses on constant availability even in the face of failures at any level of the system. While load balancing introduces redundancy, it’s not a strategy that alone can provide high availability. Servers sitting behind a load balancer may be running, but that doesn’t mean that they are available!
Availability addresses the ability to withstand failure from all angles including the network, storage, and even the data center itself. Enterprise cloud services like those from Tier 3 are built on a highly available architecture that uses redundancy at all levels to ensure that no single component failure in a data center impacts overall system availability. This includes “passive” redundancy built into data centers to overcome power or internet provider failures, as well as “active” redundancy that leverages sophisticated monitoring to detect issues and initiate failover procedures.
All of our customers get platform-level high availability when they use the Tier 3 cloud “out of the box.” That means that you can rely on us for your workloads knowing that our architecture is well-designed and highly redundant. However – back to the introductory paragraph – it’s the customer’s responsibility to design a highly available application architecture. Simply deploying an application to our cloud doesn’t make it highly available. For example, if you deploy a single Microsoft SQL Server instance in the Tier 3 cloud, you do not have a highly available database. If that database server goes offline or network access is interrupted, your application’s availability will be impacted. To design a highly available Microsoft SQL Server solution, you have multiple options. One choice is to create a cluster of database servers (where all nodes are active at the same time, or nodes sit passively, waiting to be engaged) that access data from a shared disk. When a failure in the active node is detected, the alternate node is automatically called into action.
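The active/passive pattern above can be reduced to a tiny Python sketch. This is a simplified illustration of the failover decision only (node names and the health-check mechanism are ours, not SQL Server’s actual clustering machinery):

```python
# Simplified active/passive failover -- a sketch of the pattern only,
# not actual Windows or SQL Server clustering behavior.
class Cluster:
    def __init__(self, active, standby):
        self.active = active
        self.standby = standby
        self.healthy = {active: True, standby: True}

    def report_failure(self, node):
        # In practice, a heartbeat/monitoring service flags the node.
        self.healthy[node] = False

    def current_node(self):
        # Both nodes access the same shared disk, so the standby can
        # take over as soon as the active node is detected as down.
        if not self.healthy[self.active]:
            self.active, self.standby = self.standby, self.active
        return self.active

cluster = Cluster(active="sql-node-a", standby="sql-node-b")
before = cluster.current_node()        # active node serves requests
cluster.report_failure("sql-node-a")   # monitoring detects a failure
after = cluster.current_node()         # standby is promoted
```

The key property is that clients always ask the cluster (not a specific server) who is active, so a failover is invisible to the application apart from a brief interruption.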
How can Tier 3 help?
Designing highly available systems is complex. Unfortunately, no cloud provider can offer a checkbox labeled “Make this application highly available!” in their cloud management portal. Crafting a highly available system involves a methodical approach that navigates through every single layer of the system and identifies single points of failure that should be made redundant. For components that cannot be made redundant, it’s important to make sure that the application can continue to run even if that component becomes unavailable.
The Tier 3 professional services team consists of skilled, experienced architects who have designed and built cloud-scale solutions for customers. They can sit with your team and make sure that you’ve taken advantage of every relevant feature that Tier 3 has to offer, while helping you make sure that your system landscape is constructed in a way that will ensure continual availability.
Don’t forget to regularly test your high availability design in order to uncover weak points or ensure that configurations remain valid.
What is it?
Once more we turn to Wikipedia which defines disaster recovery as:
Disaster recovery (DR) is the process, policies and procedures that are related to preparing for recovery or continuation of technology infrastructure which are vital to an organization after a natural or human-induced disaster. Disaster recovery is a subset of business continuity. While business continuity involves planning for keeping all aspects of a business functioning in the midst of disruptive events, disaster recovery focuses on the IT or technology systems that support business functions.
DR is all about how you handle unexpected events. Typically, your cloud provider has to declare a disaster before explicitly initiating DR procedures; a brief network outage or storage failure in a data center is usually not enough to trigger a disaster response. There are two terms that you often hear when defining a DR plan. A recovery point objective (RPO) describes the maximum window of data that can be lost because of a disaster. For example, an RPO of 12 hours means that when you get back online after a disaster, you may have lost the most recent 12 hours of data collected by your systems. A recovery time objective (RTO) identifies how long the IT systems (and processes) can be offline before being restored. For example, an RTO of 48 hours means that it may take two days before the systems lost in the disaster are brought back online and made usable again.
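Since your effective RPO can never be better than the interval between successful backups, the arithmetic is worth making explicit. A small Python sketch (function names and the scenario are ours, purely illustrative):

```python
from datetime import timedelta

def worst_case_data_loss(backup_interval_hours):
    # If backups run every N hours, a disaster striking just before
    # the next backup loses up to N hours of data: the effective RPO
    # can be no better than the backup interval.
    return timedelta(hours=backup_interval_hours)

def meets_rpo(backup_interval_hours, rpo_hours):
    return worst_case_data_loss(backup_interval_hours) <= timedelta(hours=rpo_hours)

# A nightly (24-hour) backup cycle cannot satisfy a 12-hour RPO,
# while a 6-hour cycle comfortably does.
nightly_ok = meets_rpo(backup_interval_hours=24, rpo_hours=12)
six_hour_ok = meets_rpo(backup_interval_hours=6, rpo_hours=12)
```

Working backwards from the RPO your business requires to the backup or replication frequency that supports it is a useful sanity check on any DR plan.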
How can Tier 3 help?
Tier 3 customers have disaster protection built natively into the platform. We offer two classes of storage: standard and premium. The major difference is that standard storage users get five days of rolling backups within a given data center, while premium storage users get fourteen days of rolling backups, including replication to an in-country data center. Tier 3 is powered by global data centers in multiple countries, and we use storage replication to enable you to get back online within 8 hours (RTO) and with a maximum RPO of 24 hours.
While this provides assurances against losing all of your data in the event of a disaster, it still may not provide the level of business continuity that you need. If your business cannot tolerate more than a few moments of downtime, even in the event of a disaster, then it’s critical to architect a solution that can withstand the loss of an entire data center. Returning to our earlier Microsoft SQL Server example, consider the ways to construct a highly available database that remains online with minimal data loss, even during a disaster. SQL Server offers replication technologies like database mirroring and AlwaysOn that make it possible to do near-real time replication across geographies.
The experts in the Tier 3 services team can help you identify all the DNS, networking, compute and storage considerations for building systems that are not only highly available within a data center, but across data centers.
It’s often the case that load balancing, high availability and disaster recovery lapses don’t surface until it’s too late. While Tier 3 does everything we can to architect our platform for maximum availability and resiliency, our customers still retain responsibility for deploying their systems in a manner that meets their performance and business continuity needs. We are eager to talk to you about how to validate your existing cloud applications or design new solutions that can function at cloud scale. Contact our services team today!
Cloud adoption is growing significantly as more enterprises see the business value of having a scalable, elastic pool of computing resources at their fingertips. However, enterprise CIOs are concerned with building application silos in the cloud that don’t integrate with the rest of their systems, data, and infrastructure. One survey asked respondents to rank their areas of satisfaction for a set of SaaS applications and found that integration with on-premises systems was the area with the most frustration. Another survey found that 67% of CIOs reported problems integrating data between cloud applications. The long-term competitive advantage you gain from the cloud will likely depend – in part – on how well you can connect your assets, regardless of location. There are unique considerations for integrating with the cloud, but the core business needs remain the same. We at Tier 3 see four areas that require focus from both the cloud provider and the customer.
Each application – whether packaged or custom built – serves a unique functional purpose. Frequently, information from another application is required to meet this purpose. For example, a CRM system may submit a query to an accounting system so that a call center agent can get a full picture of the customer’s billing history with a company. Or, an application that validates employee security badges may rely on a real-time feed of data from an ERP system that stores employee status information. Application integration is about connecting business applications at a functional level. It’s not simply data sharing, but rather involves triggering some activity in another application by issuing requests or sending “live” business events.
So how does this affect applications in the cloud? Architects are wary of attempting synchronous remote procedure calls across the Internet. Latency is a big factor, and synchronous actions don’t scale particularly well. One alternative approach is “callbacks,” where the application request is issued asynchronously and the reply is sent to a pre-determined location that is monitored by the calling application. Or, embrace the more scalable asynchronous messaging strategy, where business data is sent between systems using a fire-and-forget technique. Whether synchronous or asynchronous, application integration with cloud endpoints involves a high likelihood of encountering REST (vs. traditional SOAP) web service endpoints, so choose your tools accordingly!
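A minimal Python sketch of the fire-and-forget approach (an in-process queue stands in for a durable message broker here; event names are ours): the sender enqueues a message and moves on, and a consumer processes it on its own schedule, so neither side blocks on the other.

```python
import queue
import threading

outbound = queue.Queue()   # stands in for a durable message broker
processed = []

def consumer():
    # The receiving system drains the queue at its own pace.
    while True:
        msg = outbound.get()
        if msg is None:    # sentinel used only to stop this demo
            break
        processed.append(f"handled:{msg}")

worker = threading.Thread(target=consumer)
worker.start()

# Fire and forget: the sender enqueues and moves on -- it never
# blocks waiting for the receiver to respond.
for event in ["order-created", "order-shipped"]:
    outbound.put(event)

outbound.put(None)
worker.join()
```

With a real broker the queue is durable, so messages survive a receiver outage and latency between the two systems stops being the sender’s problem.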
To this end, you’ll come across two types of application integration products: traditional platforms that have been extended to work with the cloud, as well as entirely new platforms that are built and hosted in cloud platforms. Because each Tier 3 customer gets their own VLAN(s) that can connect to the corporate network (see Network Integration below), it’s relatively straightforward to use existing on-premises integration servers (e.g. Microsoft BizTalk Server, TIBCO ActiveMatrix Service Bus, IBM WebSphere MQ) to link to applications running in the Tier 3 cloud.
If you’re looking to do application integration between SaaS applications and servers in the Tier 3 cloud, you can either use on-premises integration servers or one of the newer cloud-based tools. For one-way messaging that requires durability but not the weight of an integration server, consider cloud-based queues such as Amazon SQS. Note that Tier 3 servers don’t receive a public IP address by default, so any integration tool that requires a “push” from the public internet to a Tier 3 server will require you to add a public IP address to the target server. If you need a full-fledged messaging engine that runs in the cloud and has adapters for cloud endpoints, consider something like the CloudHub from Mulesoft.
Data integration refers to the synchronization, transformation, quality processing, and transportation of large amounts of data between repositories. Unlike application integration, data integration is typically batch-oriented and works against data that’s already been processed by transactional systems. You’ll often find the need for data integration when doing master data management (MDM) solutions, importing dirty data from a variety of sources, or loading data warehouses for in-depth analysis.
Doing extract-transform-load (ETL) processes in the cloud introduces a few new considerations. While latency may not be as big of a factor for batch processes, bandwidth will be. Moving petabytes of data over an Internet connection is still not a speedy endeavor. Where possible, consider a Cross Connect architecture to maximize bandwidth while minimizing latency. Data integration solutions frequently include staging databases where data is manipulated or standardized as part of the processing pipeline. Depending on where the data is coming from, you may choose to stage sensitive data on your internal network instead of storing it temporarily on public cloud-based servers. Also, keep in mind that data integration tools are oriented towards relational databases, but many cloud databases leverage NoSQL designs or highly distributed architectures that may be unfamiliar to enterprise staff that primarily works with Oracle, Microsoft, and IBM technologies.
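A toy extract-transform-load pipeline in Python makes the staging step concrete (the data, field names, and functions are hypothetical, not any vendor’s API): dirty records are pulled from a source, standardized and de-duplicated in a staging step, and only then loaded into the target store.

```python
# Toy ETL pipeline: extract -> stage/transform -> load.
source_rows = [
    {"name": " Alice ", "country": "us"},
    {"name": "BOB",     "country": "US"},
    {"name": " alice ", "country": "us"},  # dirty duplicate
]

def extract(rows):
    return list(rows)

def transform(rows):
    # Staging step: trim whitespace, normalize case, de-duplicate.
    # Sensitive data could be staged on the internal network here
    # rather than on public cloud servers.
    seen, clean = set(), []
    for row in rows:
        record = (row["name"].strip().title(), row["country"].upper())
        if record not in seen:
            seen.add(record)
            clean.append({"name": record[0], "country": record[1]})
    return clean

def load(rows, warehouse):
    warehouse.extend(rows)

warehouse = []
load(transform(extract(source_rows)), warehouse)
```

In a real pipeline each stage is typically a separate system, and where the staging area lives (on-premises vs. in the cloud) is exactly the bandwidth-and-sensitivity decision discussed above.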
As with the application integration tools, the data integration tools market consists of existing players that may offer adapters to cloud endpoints, as well as entirely new providers that are oriented to cloud-based data repositories. Tools like Microsoft SQL Data Sync make it easy to synchronize Microsoft SQL Server databases running in Windows Azure to SQL Servers running on-premises or in public clouds like Tier 3. Traditional ETL provider Informatica has an innovative cloud service called Informatica Cloud which includes a growing set of adapters for connecting cloud databases to on-premises databases. Even the Amazon Web Services Data Pipeline service makes it simple to transfer data between AWS databases and on-premises databases. In each case, the ETL tool uses a locally-installed server agent that securely connects the data repositories to your network. This means that you do NOT need to have your internal databases exposed to the public internet in order to synchronize with cloud-based data repositories.
If you run your enterprise databases in the Tier 3 cloud, you can perform data integration using existing ETL tools or any of this new crop of cloud-friendly products.
Ideally, cloud servers are simply an extension of on-premises servers. To have a fully integrated enterprise landscape, servers in the corporate data center should be able to freely communicate with servers running off-premises. For example, Tier 3 customers use our cloud to run their enterprise collaboration environment, email infrastructure, line of business applications, and many other critical internal-facing systems. In order for these scenarios to work, the enterprise network must be extended to include the cloud network.
One choice is to set up simple client virtual private networks (VPNs) that connect an individual machine to the cloud network. In this case, an individual user would establish a VPN connection and access the application or database residing on the cloud server. However, this only works well for small businesses or temporary access to applications. For a persistent connection between networks, consider working with the cloud provider on a point-to-point VPN tunnel. This provides a much better end user experience. An even tighter integration is possible through Direct Connect. For enterprises that use one of our co-location partners for their data center hosting, Tier 3 can establish a cross-connect between the physical hardware. This ensures a high performing connection that doesn’t travel over the public internet channel. If cross-connect isn’t an option, then perhaps a MPLS network mesh with any number of major network carriers is feasible. We can easily add a secure connection from your MPLS network to the Tier 3 cloud.
Finally, security. It’s an important consideration when working with distributed systems, and identity management is an oft-overlooked area. We’ve all become accustomed to countless credentials for the variety of business systems (on-premises and off-premises) that we use every day. Whether accessing cloud systems, integrating with partner systems, or enabling a remote workforce, a strong identity management strategy is key. How can employees use a single set of credentials to access a diverse range of systems across the Internet? Is centralized role-based-access-control possible or does each application have to maintain their own role hierarchy? These are among the many questions you should ask yourself when figuring out a long term identity strategy.
Identity federation is an emerging area in the cloud. There are multiple standards that come into play, including SAML, XACML, and WS-Trust. Microsoft offers its Windows Active Directory Federation Services and Windows Azure Active Directory products. You’ll also find strong products from Ping Identity and CA. As enterprises face more and more demand by employees and partners to “bring your own identity”, there will be a greater need to invest in a complete identity management solution.
Tier 3 supports SAML for access to our Control Portal. So, our customers can manage their cloud environment without ever manually logging in. This not only makes it convenient, but also creates a more secure environment where there are fewer passwords to remember and access is controlled from a central location.
By planning for all four of these integration dimensions, enterprises can more fully achieve the benefit of cloud computing while getting maximum reuse out of existing assets. Neglecting any one of these can introduce barriers to adoption or lead to inefficient or insecure workarounds.
Want our help designing your solution for each of these integration dimensions? Contact us to set up a working session with our experienced services team.
Manual environment deployments can be time-consuming and expensive. Over the years we've felt our customers' frustrations: enterprise IT departments trying to be more agile in the face of business demands; ISVs that need faster time-to-money; systems integrators bogged down in repetitive work. That's why we're thrilled to announce the launch of Environment Engine, a toolset that automates environment and application deployments to the enterprise cloud using "Blueprints." Blueprints contain the DNA of an environment: host configurations, firewall and load balancing rules, and any applications running on top. (And yes, before you ask, these tools are completely free for all Tier 3 customers.)

With Environment Engine, the elusive IT-as-a-Service is no longer a myth. Now IT pros can create best practice-optimized Blueprints that others can use later to deploy complex applications and environments on demand. Rollout times drop from days or weeks to hours or minutes, and because deployments are automated across the whole technology stack, build-outs are consistent and leave little room for pesky human errors.

So how exactly does all of this work? Let's get into the nitty-gritty:

1. Using the Blueprint Designer, a technical expert creates Blueprints that include host and network configurations; firewall, load balancing, and autoscale rules; and the applications that will run in that environment. The resulting Blueprint can be published to private libraries for internal use, or to the public Tier 3 Blueprint Library for broader exposure and adoption across organizational boundaries.
2. From the Blueprint Library, users select the Blueprint best suited to their requirements based on variables including category, keywords, characteristic filters such as OS or sizing, Blueprint maturity, and social feedback.
3. Because no one likes surprises on their bill, the Blueprint Builder displays the estimated monthly cost of an environment, as well as any resource or software requirements. From there, users adjust pre-defined variables in the selected Blueprint to ensure proper configuration, then deploy best practice-optimized environments to the Tier 3 Enterprise Cloud Platform.

Curious about what kinds of applications you can deploy using Environment Engine? We are in the process of creating several Blueprints based on common environments, or those that may benefit from our team's expertise. (Our goal is to expand this list over time as others publish to the public Blueprint Library and we add Blueprints to meet demand.)

- Microsoft SharePoint® Server
- Microsoft Exchange® Server (single server)
- Microsoft Exchange® Server HA, using database availability groups
- Microsoft SQL Server 2008
- Active Directory
- Team Foundation Server
- ASP.NET & SQL web app: single node
- ASP.NET two-node web application: front-end web server and back-end SQL server
- LAMP stack

Check out the Environment Engine Datasheet, the recorded Environment Engine demo, or drop by Booth #213 at VMworld, August 29–September 1 in Las Vegas, for a live demonstration.
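To make the Blueprint idea concrete, here's a minimal Python sketch that assembles a Blueprint-style document covering the building blocks described above: a host configuration plus firewall rules. Note that the element names (`Blueprint`, `Host`, `Firewall`, and so on) are illustrative assumptions, not Tier 3's published XML schema.

```python
# Illustrative sketch only: the element names below are assumptions,
# not Tier 3's actual Blueprint schema.
import xml.etree.ElementTree as ET


def build_blueprint(name, os_name, cpu_count, memory_gb, firewall_ports):
    """Assemble a minimal, hypothetical Blueprint document."""
    bp = ET.Element("Blueprint", attrib={"name": name})

    # Host configuration: OS, CPU, and memory sizing for one node.
    host = ET.SubElement(bp, "Host")
    ET.SubElement(host, "OperatingSystem").text = os_name
    ET.SubElement(host, "CPU").text = str(cpu_count)
    ET.SubElement(host, "MemoryGB").text = str(memory_gb)

    # Firewall rules: one Allow element per open port.
    firewall = ET.SubElement(bp, "Firewall")
    for port in firewall_ports:
        ET.SubElement(firewall, "Allow", attrib={"port": str(port)})

    return ET.tostring(bp, encoding="unicode")


# A hypothetical single-node LAMP-style Blueprint opening ports 80 and 443.
xml_doc = build_blueprint("lamp-stack", "CentOS 5.6", 2, 4, [80, 443])
print(xml_doc)
```

The value of packaging the definition this way is that the same document can be stored in a library, discovered by other users, and replayed by an automation engine, which is exactly the Designer / Library / Builder workflow outlined above.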
Toolset creates best practice-optimized, reusable “Blueprints” of complex environments for automated deployment and IT self-service delivery models
BELLEVUE, Wash.—August 24, 2011—Tier 3, Inc., an enterprise cloud platform provider, today announced the Environment Engine, a platform-agnostic toolset that automates the design and deployment of complex environments and applications onto the Tier 3 Enterprise Cloud. From the network and storage layers all the way through the OS and applications, the toolset turns complex environments into best practice-optimized, reusable "Blueprints" for deployment via new IT self-service delivery models. The company also announced an initial, robust set of cross-platform Blueprints that Tier 3 will make available to its Enterprise Cloud Platform customers at launch.

While third-party scripting tools automate only the top (application deployment) or bottom (server image configuration) of the platform stack, the Tier 3 Environment Engine (see datasheet and demo) integrates these functions into a simple toolset built to interface with every aspect of the Tier 3 cloud platform. The Environment Engine toolset consists of a Blueprint Designer, Blueprint Library, and Blueprint Builder that together create a seamless automation workflow to manage the creation and storage of Blueprints, as well as the discovery and rapid deployment of these tested configurations.

"Deploying complex environments and applications into the cloud can be just that – complex – with words like 'time-consuming,' 'costly,' and 'error-prone' coming to mind for many. Automation, on the other hand, means simplicity and agility for both IT and the business," said Jared Wray, chief technology officer, Tier 3.
"The Environment Engine greatly simplifies deployment of cloud-based services and, combined with the already robust automation in our Enterprise Cloud Platform, opens up new IT service delivery models for our customers."

The Environment Engine Cloud Automation Process

- Using the Blueprint Designer, the application or environment owner scripts the core Blueprint building blocks, including (but not limited to) all aspects of host configuration, network configuration, firewall rules, load balancing and autoscale rules, and the sequenced events, based on scripts and task lists, that provision applications according to upstream or downstream dependencies. The Blueprint is then uploaded to the Tier 3 Blueprint Library.
- Browsing the Blueprint Library, users select the Blueprint best suited to their requirements based on variables including category, keywords, characteristic filters such as OS or sizing, Blueprint maturity, and social feedback.
- Leveraging the Blueprint Builder, users then configure pre-defined variables in the selected Blueprint to deploy complete, hardened, best practice-optimized environments in the Tier 3 Enterprise Cloud in just minutes.

Automation use cases for the Tier 3 Environment Engine

The Environment Engine facilitates the onboarding of complex environments onto Tier 3's Enterprise Cloud, a true enterprise-class cloud platform with a 99.999 percent ("five nines") SLA across server, network, and storage; security; built-in disaster recovery; and predictive optimization technologies for uncompromising performance across the entire stack. Use cases in beta include:

- Enterprise IT departments leverage Blueprints to be more agile and responsive to business demands via IT-as-a-Service models. By enabling customer self-service of complete application environments (such as SharePoint) hosted in the Tier 3 Enterprise Cloud, IT departments speed deployment while reducing demand on IT operations resources.
- Independent software vendors (ISVs) can accelerate adoption and deployment of their applications by publishing optimized Blueprints to the Tier 3 library for customer use.
- Developers can integrate with the system via a full API and XML schema, connecting their systems directly with Tier 3 to automate provisioning of complex applications. Not only does this reduce operational support costs, but it dramatically improves the customer experience.

Cost & Availability

The Environment Engine toolset is in private beta today, with general availability expected in October. The toolset is a value-added service at no extra charge to existing Tier 3 customers. At launch, Tier 3 will make available a core set of some of the most common and complex Blueprints for enterprises, including:

- Microsoft SharePoint® Server
- Microsoft Exchange® Server (single server)
- Microsoft Exchange® Server HA, using database availability groups
- Microsoft SQL Server 2008
- Active Directory
- Team Foundation Server
- ASP.NET & SQL web app: single node
- ASP.NET two-node web application: front-end web server and back-end SQL server
- LAMP stack

See the Tier 3 Blueprint Engine demo in Booth #213 at VMworld, August 29–September 1 in Las Vegas.

About Tier 3

Tier 3, based in Bellevue, Wash., goes beyond traditional cloud offerings to provide an agile, self-optimizing enterprise cloud platform. Enterprises large and small depend on the company's secure, intelligent platform to run their mission-critical, production applications and services so they can focus on their core business. They realize the cloud benefits of lower TCO and dynamic scaling, delivered on an enterprise-class platform with SLAs, security, and built-in disaster recovery. Innovative technologies deliver predictive optimization for unprecedented performance at all layers. For more information, visit http://www.tier3.com.