It’s easy for cloud customers to get confused about the roles and responsibilities of their internal team and their cloud vendor. That confusion is especially evident when it comes to application availability and business continuity planning. How does disaster recovery differ from high availability? Does my cloud provider automatically load balance my application servers? The answers to these questions are critical, but sometimes overlooked until a crisis occurs. In this post, we’ll talk about load balancing, high availability, and disaster recovery in the cloud, and what Tier 3’s cloud infrastructure has to offer.
Load Balancing: What is it?
Wikipedia describes load balancing as:
Load balancing is a computer networking method to distribute workload across multiple computers or a computer cluster, network links, central processing units, disk drives, or other resources, to achieve optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy.
You commonly see this technique employed in web applications where multiple web servers work together to handle inbound traffic. There are at least two reasons why load balancing is employed:
- The required capacity is too large for a single machine. When running processes that consume a large amount of system resources (e.g. CPU and memory), it often makes sense to employ multiple servers to distribute the work instead of constantly adding capacity to a single server. In plenty of cases, it’s not even possible to allocate enough memory or CPU to a single machine to handle all of the work! Load balancing across multiple servers makes it possible to host high traffic websites or run complex data processing jobs that demand more resources than a single server can deliver.
- Looking for more reliability and flexibility in a solution deployment. Even if you *could* run an entire application on a single server, it may not be a good idea. Load balancing can increase reliability by providing multiple servers capable of doing the same job. If one server becomes unavailable, the others simply pick up the additional work until a new server comes online. Software updates become easier, since a server can be taken out of the load balancing pool whenever a patch or reboot is necessary. Load balancing gives system administrators more flexibility to maintain servers without negatively impacting the application as a whole.
Load balancing can be accomplished using either a “push” or a “pull” model. For web applications or database clusters that sit behind a load balancer, inbound requests are pushed to the pool of servers based on an algorithm such as round-robin. In this scenario, servers await traffic sent to them by the load balancer. It’s also possible to use a “pull” model where work requests are added to a centralized “queue” and a collection of servers retrieve those requests from that queue when they are available. For instance, consider big data processing scenarios where many servers work to analyze data and return results. Each server takes a chunk of work and the overall processing load is distributed across many machines.
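To make the push model concrete, here’s a minimal round-robin sketch in PowerShell. It’s purely illustrative – the server names are hypothetical, and a real load balancer would also perform health checks before routing:

```powershell
# Minimal round-robin "push" dispatch (hypothetical server names)
$pool = @("web01", "web02", "web03")
$next = 0

foreach ($requestId in 1..6) {
    # Choose the next server in rotation, wrapping around at the end of the pool
    $target = $pool[$next % $pool.Count]
    Write-Host "Request $requestId routed to $target"
    $next++
}
```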
How can Tier 3 help?
Tier 3 offers multiple load balancing options to our customers. All customers have access to a free, shared load balancer. This load balancer service – based on the powerful Citrix Netscaler product – provides a range of capabilities including SSL offloading for higher performance, session persistence (known as “sticky sessions”), and routing of TCP, HTTP and HTTPS traffic for up to three servers. To use this service today, send a request to email@example.com. We plan to launch a self-service version of this capability in the very near future.
If you’re looking for more control over the load balancing configuration or have higher bandwidth needs, you can deploy a dedicated load balancer (virtual appliance) into the Tier 3 cloud. This “bring your own load balancer” option leverages internal expertise you may have with a particular vendor. It also gives you complete control over the load balancer setup, so you can modify the routing algorithm or enable/disable the features that matter to your business.
High Availability: What is it?
Returning to Wikipedia, high availability is defined as:
High availability is a system design approach and associated service implementation that ensures a prearranged level of operational performance will be met during a contractual measurement period.
High availability is described through service level agreements and achieved through an architecture that focuses on constant availability even in the face of failures at any level of the system. While load balancing introduces redundancy, it’s not a strategy that alone can provide high availability. Servers sitting behind a load balancer may be running, but that doesn’t mean that they are available!
Availability addresses the ability to withstand failure from all angles including the network, storage, and even the data center itself. Enterprise cloud services like those from Tier 3 are built on a highly available architecture that uses redundancy at all levels to ensure that no single component failure in a data center impacts overall system availability. This includes “passive” redundancy built into data centers to overcome power or internet provider failures, as well as “active” redundancy that leverages sophisticated monitoring to detect issues and initiate failover procedures.
All of our customers get platform-level high availability when they use the Tier 3 cloud “out of the box.” That means that you can rely on us for your workloads knowing that our architecture is well-designed and highly redundant. However – back to the introductory paragraph – it’s the customer’s responsibility to design a highly available application architecture. Simply deploying an application to our cloud doesn’t make it highly available. For example, if you deploy a single Microsoft SQL Server instance in the Tier 3 cloud, you do not have a highly available database. If that database server goes offline or network access is interrupted, your application’s availability will be impacted. To design a highly available Microsoft SQL Server solution, you have multiple options. One choice is to create a cluster of database servers (where either all nodes are active at the same time, or standby nodes sit passively, waiting to be engaged) that access data from a shared disk. When a failure in the active node is detected, the alternate node is automatically called into action.
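As a rough illustration of the active/passive pattern, the sketch below polls a primary node and promotes the standby when the primary stops responding. The node names are hypothetical, and in practice you’d rely on proven clustering technology (such as Windows Server Failover Clustering) rather than hand-rolled monitoring:

```powershell
# Conceptual active/passive failover monitor (hypothetical node names).
# Real deployments should use Windows Server Failover Clustering instead.
$primary = "sqlnode01"
$standby = "sqlnode02"

while ($true) {
    if (-not (Test-Connection -ComputerName $primary -Count 2 -Quiet)) {
        Write-Warning "$primary is unreachable; promoting $standby to active"
        # Promotion steps (attach shared disk, start the SQL Server service) go here
        break
    }
    Start-Sleep -Seconds 30
}
```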
How can Tier 3 help?
Designing highly available systems is complex. Unfortunately, no cloud provider can offer a checkbox labeled “Make this application highly available!” in their cloud management portal. Crafting a highly available system involves a methodical approach that navigates through every single layer of the system and identifies single points of failure that should be made redundant. For components that cannot be made redundant, it’s important to make sure that the application can continue to run even if that component becomes unavailable.
The Tier 3 professional services team consists of skilled, experienced architects who have designed and built cloud-scale solutions for customers. They can sit with your team and make sure that you’ve taken advantage of every relevant feature that Tier 3 has to offer, while helping you make sure that your system landscape is constructed in a way that will ensure continual availability.
Don’t forget to regularly test your high availability design in order to uncover weak points and ensure that configurations remain valid.
Disaster Recovery: What is it?
Once more we turn to Wikipedia which defines disaster recovery as:
Disaster recovery (DR) is the process, policies and procedures that are related to preparing for recovery or continuation of technology infrastructure which are vital to an organization after a natural or human-induced disaster. Disaster recovery is a subset of business continuity. While business continuity involves planning for keeping all aspects of a business functioning in the midst of disruptive events, disaster recovery focuses on the IT or technology systems that support business functions.
DR is all about how you handle unexpected events. Typically, your cloud provider has to declare a disaster before explicitly initiating DR procedures; a brief network outage or storage failure in a data center is usually not enough to trigger a disaster response. There are two terms that you often hear when defining a DR plan. A recovery point objective (RPO) describes the maximum window of data that can be lost because of a disaster. For example, an RPO of 12 hours means that when you get back online after a disaster, you may have lost up to the most recent 12 hours of data collected by your systems. A recovery time objective (RTO) identifies how long the IT systems (and processes) can be offline before being restored. For example, an RTO of 48 hours means that it may take two days before the systems lost in the disaster are brought back online and made usable again.
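A quick way to reason about RPO is that your backup (or replication) interval bounds your worst-case data loss. The snippet below is a trivial sanity check with made-up numbers:

```powershell
# Does a backup schedule satisfy a stated RPO? (hypothetical values)
$rpoHours            = 12   # maximum tolerable data loss
$backupIntervalHours = 8    # how often backups actually run

if ($backupIntervalHours -le $rpoHours) {
    Write-Host "OK: worst-case loss is $backupIntervalHours hours, within the $rpoHours-hour RPO"
} else {
    Write-Host "Gap: backups every $backupIntervalHours hours cannot meet a $rpoHours-hour RPO"
}
```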
How can Tier 3 help?
Tier 3 customers get disaster protection natively in the platform. We offer two classes of storage: standard and premium. The major difference is that standard storage gets five days of rolling backups within a given data center, while premium storage users get fourteen days of rolling backups, including replication to an in-country data center. Tier 3 is powered by global data centers in multiple countries, and we use storage replication to enable you to get back online within 8 hours (RTO) and with a maximum RPO of 24 hours.
While this provides assurances against losing all of your data in the event of a disaster, it still may not provide the level of business continuity that you need. If your business cannot tolerate more than a few moments of downtime, even in the event of a disaster, then it’s critical to architect a solution that can withstand the loss of an entire data center. Returning to our earlier Microsoft SQL Server example, consider the ways to construct a highly available database that remains online with minimal data loss, even during a disaster. SQL Server offers replication technologies like database mirroring and AlwaysOn that make it possible to do near-real time replication across geographies.
The experts in the Tier 3 services team can help you identify all the DNS, networking, compute and storage considerations for building systems that are not only highly available within a data center, but across data centers.
It’s often the case that load balancing, high availability and disaster recovery lapses don’t surface until it’s too late. While Tier 3 does everything we can to architect our platform for maximum availability and resiliency, our customers still retain responsibility for deploying their systems in a manner that meets their performance and business continuity needs. We are eager to talk to you about how to validate your existing cloud applications or design new solutions that can function at cloud scale. Contact our services team today!
Cloud adoption is growing significantly as more enterprises see the business value of having a scalable, elastic pool of computing resources at their fingertips. However, enterprise CIOs are concerned with building application silos in the cloud that don’t integrate with the rest of their systems, data, and infrastructure. One survey asked respondents to rank their areas of satisfaction for a set of SaaS applications and found that integration with on-premises systems was the area with the most frustration. Another survey found that 67% of CIOs reported problems integrating data between cloud applications. The long-term competitive advantage you gain from the cloud will likely depend – in part – on how well you can connect your assets, regardless of location. There are unique considerations for integrating with the cloud, but the core business needs remain the same. We at Tier 3 see four areas that require focus from both the cloud provider and the customer.
Each application – whether packaged or custom built – serves a unique functional purpose. Frequently, information from another application is required to meet this purpose. For example, a CRM system may submit a query to an accounting system so that a call center agent can get a full picture of the customer’s billing history with a company. Or, an application that validates employee security badges may rely on a real-time feed of data from an ERP system that stores employee status information. Application integration is about connecting business applications at a functional level. It’s not simply data sharing; rather, it involves triggering some activity in another application by issuing requests or sending “live” business events.
So how does this affect applications in the cloud? Architects are wary of attempting synchronous remote procedure calls across the Internet: latency is a big factor, and synchronous actions don’t scale particularly well. One alternative approach is “callbacks,” where the application request is issued asynchronously and the reply is sent to a pre-determined location that is monitored by the calling application. Or, embrace the more scalable asynchronous messaging strategy, where business data is sent between systems using a fire-and-forget technique. Whether synchronous or asynchronous, application integration with cloud endpoints involves a high likelihood of encountering REST (vs. traditional SOAP) web service endpoints, so choose your tools accordingly!
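As a simple illustration of the callback pattern, the PowerShell sketch below submits a request and hands the remote service a callback URL to reply to later. The endpoints and payload shape are hypothetical:

```powershell
# Asynchronous "callback" integration sketch (hypothetical URLs and payload)
$request = @{
    orderId     = 12345
    callbackUrl = "https://apps.example.com/order-status"  # monitored by the caller
} | ConvertTo-Json

# Submit the request and return immediately; the reply arrives later at callbackUrl
Invoke-RestMethod -Uri "https://partner.example.com/api/orders" `
                  -Method Post -Body $request -ContentType "application/json"
```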
To this end, you’ll come across two types of application integration products: traditional platforms that have been extended to work with the cloud, and entirely new platforms that are built and hosted in the cloud. Because each Tier 3 customer gets their own VLAN(s) that can connect to the corporate network (see Network Integration below), it’s relatively straightforward to use existing on-premises integration servers (e.g. Microsoft BizTalk Server, TIBCO ActiveMatrix Service Bus, IBM WebSphere MQ) to link to applications running in the Tier 3 cloud.
If you’re looking to do application integration between SaaS applications and servers in the Tier 3 cloud, you can either use on-premises integration servers or one of the newer cloud-based tools. For one-way messaging that requires durability but not the weight of an integration server, consider cloud-based queues such as Amazon SQS. Note that Tier 3 servers don’t receive a public IP address by default, so any integration tool that requires a “push” from the public internet to a Tier 3 server will require you to add a public IP address to the target server. If you need a full-fledged messaging engine that runs in the cloud and has adapters for cloud endpoints, consider something like the CloudHub from Mulesoft.
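For instance, here’s roughly what one-way queued messaging looks like using the AWS Tools for PowerShell against SQS. The queue URL and message body are made up, and you’d need AWS credentials configured on both ends:

```powershell
# Durable one-way messaging through a cloud queue (hypothetical queue URL)
$queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/integration-events"

# Producer side: fire-and-forget send
Send-SQSMessage -QueueUrl $queueUrl -MessageBody '{"event":"OrderCreated","id":42}'

# Consumer side (e.g., a Tier 3 server pulling work): receive, process, delete
$msg = Receive-SQSMessage -QueueUrl $queueUrl
if ($msg) {
    Write-Host "Processing: $($msg.Body)"
    Remove-SQSMessage -QueueUrl $queueUrl -ReceiptHandle $msg.ReceiptHandle -Force
}
```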
Data integration refers to the synchronization, transformation, quality processing, and transportation of large amounts of data between repositories. Unlike application integration, data integration is typically batch-oriented and works against data that’s already been processed by transactional systems. You’ll often find the need for data integration when doing master data management (MDM) solutions, importing dirty data from a variety of sources, or loading data warehouses for in-depth analysis.
Doing extract-transform-load (ETL) processes in the cloud introduces a few new considerations. While latency may not be as big of a factor for batch processes, bandwidth will be. Moving petabytes of data over an Internet connection is still not a speedy endeavor. Where possible, consider a Cross Connect architecture to maximize bandwidth while minimizing latency. Data integration solutions frequently include staging databases where data is manipulated or standardized as part of the processing pipeline. Depending on where the data is coming from, you may choose to stage sensitive data on your internal network instead of storing it temporarily on public cloud-based servers. Also, keep in mind that data integration tools are oriented towards relational databases, but many cloud databases leverage NoSQL designs or highly distributed architectures that may be unfamiliar to enterprise staff that primarily works with Oracle, Microsoft, and IBM technologies.
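As a sketch of the staging idea, the example below pulls rows from a cloud database, standardizes them on an internal staging server, and loads them onward. Invoke-Sqlcmd ships with the SQL Server tools; the server, database, and table names are hypothetical, and a real pipeline would bulk-load rather than insert row by row:

```powershell
# Hypothetical stage-then-load step: extract from the cloud, clean locally
$rows = Invoke-Sqlcmd -ServerInstance "cloud-sql01" -Database "Sales" `
                      -Query "SELECT CustomerId, LTRIM(RTRIM(Email)) AS Email FROM dbo.Leads"

foreach ($row in $rows) {
    # Standardize in the internal staging tier before loading the warehouse
    $email = $row.Email.ToLower()
    Invoke-Sqlcmd -ServerInstance "staging-sql01" -Database "StagingDW" `
                  -Query "INSERT INTO dbo.CleanLeads (CustomerId, Email) VALUES ($($row.CustomerId), '$email')"
}
```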
As with the application integration tools, the data integration tools market consists of existing players that may offer adapters to cloud endpoints, as well as entirely new providers that are oriented to cloud-based data repositories. Tools like Microsoft SQL Data Sync make it easy to synchronize Microsoft SQL Server databases running in Windows Azure to SQL Servers running on-premises or in public clouds like Tier 3. Traditional ETL provider Informatica has an innovative cloud service called Informatica Cloud which includes a growing set of adapters for connecting cloud databases to on-premises databases. Even the Amazon Web Services Data Pipeline service makes it simple to transfer data between AWS databases and on-premises databases. In each case, the ETL tool uses a locally-installed server agent that securely connects the data repositories to your network. This means that you do NOT need to have your internal databases exposed to the public internet in order to synchronize with cloud-based data repositories.
If you run your enterprise databases in the Tier 3 cloud, you can perform data integration using existing ETL tools or any of this new crop of cloud-friendly products.
Ideally, cloud servers are simply an extension of on-premises servers. To have a fully integrated enterprise landscape, servers in the corporate data center should be able to freely communicate with servers running off-premises. For example, Tier 3 customers use our cloud to run their enterprise collaboration environment, email infrastructure, line of business applications, and many other critical internal-facing systems. In order for these scenarios to work, the enterprise network must be extended to include the cloud network.
One choice is to set up simple client virtual private networks (VPNs) that connect an individual machine to the cloud network. In this case, an individual user would establish a VPN connection and access the application or database residing on the cloud server. However, this only works well for small businesses or temporary access to applications. For a persistent connection between networks, consider working with the cloud provider on a point-to-point VPN tunnel, which provides a much better end user experience. An even tighter integration is possible through Direct Connect. For enterprises that use one of our co-location partners for their data center hosting, Tier 3 can establish a cross-connect between the physical hardware. This ensures a high-performing connection that doesn’t travel over the public internet. If a cross-connect isn’t an option, then perhaps an MPLS network mesh with any number of major network carriers is feasible. We can easily add a secure connection from your MPLS network to the Tier 3 cloud.
Finally, security. It’s an important consideration when working with distributed systems, and identity management is an oft-overlooked area. We’ve all become accustomed to juggling countless credentials for the variety of business systems (on-premises and off-premises) that we use every day. Whether accessing cloud systems, integrating with partner systems, or enabling a remote workforce, a strong identity management strategy is key. How can employees use a single set of credentials to access a diverse range of systems across the Internet? Is centralized role-based access control possible, or does each application have to maintain its own role hierarchy? These are among the many questions you should ask yourself when figuring out a long-term identity strategy.
Identity federation is an emerging area in the cloud. There are multiple standards that come into play, including SAML, XACML, and WS-Trust. Microsoft offers its Active Directory Federation Services and Windows Azure Active Directory products. You’ll also find strong products from Ping Identity and CA. As enterprises face more and more demand from employees and partners to “bring your own identity,” there will be a greater need to invest in a complete identity management solution.
Tier 3 supports SAML for access to our Control Portal, so our customers can manage their cloud environment without maintaining yet another set of credentials. This is not only convenient, but also creates a more secure environment where there are fewer passwords to remember and access is controlled from a central location.
By planning for all four of these integration dimensions, enterprises can more fully achieve the benefit of cloud computing while getting maximum reuse out of existing assets. Neglecting any one of these can introduce barriers to adoption or lead to inefficient or insecure workarounds.
Want our help designing your solution for each of these integration dimensions? Contact us to set up a working session with our experienced services team.
Tier 3 recently launched a new version of the tier3.com website. This was a complete site redesign and an important step in explaining why Tier 3 is a premier choice for your cloud computing needs. This redesign was led by Nathan Young, Tier 3’s talented Creative Director and UI Designer. I sat down with Nate and asked him a few questions about the goals and technology behind the new website.
Richard: I suspect that when you planned the Tier 3 re-design, you also looked at what our peers in the industry have done with their own web presence. Without naming names, what sort of things did you see that you liked, and disliked?
Nate: One thing we noticed while doing a competitive audit was that many cloud company websites felt very “cards close to the vest.” Granted, there would be tons of information, but nothing that actually shows what the customer experience is like. It felt as if there was a standard checklist of benefits and specs that had to be on the site, but nothing to support those claims with product experience demonstrations.
Part of my job as Creative Director for Tier 3 is being an experience designer, so I appreciate anytime one of our competitors – or any company for that matter – demonstrates the actual product experience through things like screencasts, screenshots, or demo accounts. It really puts your product (and company) out there, and demonstrates a level of confidence that I think is lacking from many traditional enterprise vendors. At a certain level, cloud providers – or hosts in general – can all offer the maximum IOPS, bandwidth, available CPU, or memory, but our customer experience is what I believe to be a key differentiator and competitive advantage.
Richard: What are the hallmarks of a useful, visually appealing corporate website? How do you balance the need to capture the visitor’s attention while not creating a superficial experience?
Nate: Clearly communicating what it is your company does and making it easy for potential customers to know what their experience will be like should they choose you. Once you’ve got that, then the rest is easy.
Richard: What was missing in the previous Tier 3 website and what did you want to make sure we had in this version?
Nate: Honestly, we suffered from the very thing I said I disliked above: we didn’t show the customer experience. That made it harder than necessary to communicate our value to peers and customers.
Richard: A good CMS seems to be critical to an agile, maintainable website. How did you go about identifying the CMS that Tier 3 uses?
Nate: Besides the standard list of features a CMS is supposed to deliver (security, stability, scalability), one of the first considerations was using a CMS that had the flexibility to allow us to execute our creative vision. We ended up choosing ExpressionEngine by EllisLab. It met all our needs, plus it has a great community of developers around it.
Richard: What are the technologies used in the new Tier 3 website?
Richard: What aspect of the site are you the most proud of, and why?
Nate: Customer experience is one of the major differentiators for Tier 3, so I’m most proud of showing the product experience. That, and the fact that we can easily iterate on the site content and layout with our content management system. In fact, we’re working on improvements right now…
We at Tier 3 never like to see fellow cloud providers experience downtime, as it hurts the reputation of our industry. Following an outage at a couple of major cloud providers last week, many pundits came out of the woodwork to scold the customers of these cloud services who experienced corresponding downtime. Why? It’s become “common knowledge” that if a user of cloud services experiences downtime, then they haven’t properly architected their apps for the cloud. I wonder why we assume that every business has the engineering prowess of cloud pioneers like Netflix. Cloud users are rightly encouraged to build and deploy distributed applications that can withstand the failure of any component(s), but the reality is that this doesn’t always happen, for one or more of these reasons:
They Don’t Know Any Better
While many of us have spent years in the cloud, it’s easy to forget that this is an entirely new domain for the vast majority of enterprise customers. To be sure, principles of good architecture and highly available systems have been around for decades, but we recognize that cloud computing introduces its own wrinkles to those existing patterns. It’s up to all of us in the industry to help educate others on the right architecture, tools, and infrastructure needed to build truly cloud-scale applications.
Guess what? Very few organizations have the in-house architects, developers, and operations pros to plan and build multi-tier, globally distributed cloud applications. Doing this requires advanced knowledge of modern web technologies, database repositories, storage systems, and networking configuration. So in some cases, the rush to the cloud has left organizations without the technologists they need to build highly available, scalable cloud apps.
Also, enterprise IT shops have data centers full of modern and legacy commercial-off-the-shelf (COTS) software that is not built for the cloud. While nearly any credible COTS product has a reference architecture for a highly available deployment, we see plenty of such products that (a) still have single points of failure, (b) only operate efficiently when housed physically together in the same data center, and (c) have complex disaster recovery procedures that don’t easily support an instant failover. These cloud customers may simply not be able to refactor their existing systems to survive an outage in the data center that hosts them.
It’s often said that cloud customers can get whatever availability they’re willing to pay for. That is almost certainly true, but it surfaces a point that often seems lost on those who criticize businesses that go offline in an outage: a comprehensive DR plan isn’t cheap.
Some businesses go offline during a cloud outage because they’ve made the conscious choice to run that risk. Running a hot backup that is a complete mirror of production requires constant synchronization at (often) double the overall cost. Many organizations choose to incur this cost because uptime is their top priority, while other businesses accept downtime as an occasional fact of life. Just because someone actively chooses to save money and tolerate downtime doesn’t mean that they don’t “get the cloud.”
While Tier 3 has strong SLAs based on the reliability of our platform, we can’t protect users against their own design decisions. But we can abstract a lot of the complexity away from the customer, so that many architecture best practices “come for free” with our platform. We have made a strategic choice to engineer a platform that makes life a little easier for enterprises that don’t have the resources or types of applications that are a perfect fit for cloud computing.
How do we help organizations that may not have the in-house skills or the types of applications that are cloud-ready?
We run on enterprise-class hardware. While most cloud vendors freely advertise that they run commodity hardware that may fail unexpectedly, Tier 3 has invested in powerful hardware at each layer of our stack. While no infrastructure is infallible and failures WILL happen, our infrastructure investments have proven to give our customers a more reliable experience. This is especially true for applications that cannot scale out across dozens of cheap commodity servers.
Services like load balancing are built-in at no additional cost. We strive to make it as easy as possible for enterprises to avoid single points of failure, and redundancy is pervasive in the Tier 3 architecture. We surface some of these capabilities up to our customers, including free access to our load balancing software. This makes it simpler to design and deploy highly available web software.
Customers get built-in backup and recovery services for virtual machines at no additional cost. Every Tier 3 customer gets VM-level snapshots taken automatically on a daily basis and stored for up to 14 days. The snapshots for any given data center are stored in an alternate data center to ensure that customers can quickly stand up a new environment that mirrors the previous one (if one didn’t exist already). While customers can lose up to a day’s worth of data by relying solely on our automated snapshots, Tier 3 still provides a level of protection against unexpected failures.
Organizations building cloud apps should demand that developers carefully design fault-tolerant software that can take advantage of the scale and distributed nature of the cloud. Likewise, you need an operations staff with the automation in place to regularly test your cloud infrastructure and quickly recover from failures. But Tier 3 thinks it should be radically easier to reduce your risk when hiccups occur. We’re not at a point where every enterprise has such capabilities, and Tier 3 is here to make that transition to cloud software easier.
In last week’s cloud software release, we pushed out a subtle new capability that should pack a big punch for our customers. Tier 3 is constantly looking for ways to engineer a better server management experience for our IaaS customers, and we think that this is a great example of that focus. In this blog post, I’ll describe this new feature, show you where to access it, and explain when you will want to use it.
What is it?
In a nutshell, users can now easily execute arbitrary script commands when creating or managing servers in our cloud. Windows users can choose between PowerShell and Windows Command scripts, while Linux users may use SSH scripts. Both the Command and SSH scripts execute directly on the target machine(s) whereas PowerShell scripts take advantage of the “remote PowerShell” capability.
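If you’ve used PowerShell remoting by hand, the behavior is similar to Invoke-Command: the statements are sent to the target machine and run there. A rough equivalent, with a hypothetical server name:

```powershell
# Roughly what remote PowerShell execution looks like when done manually
# (hypothetical server name; the platform handles this plumbing for you)
Invoke-Command -ComputerName "web01" -ScriptBlock {
    Restart-Service "W3SVC"   # any arbitrary statement(s) you'd run on the server
}
```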
Where do I access it?
We expose this capability in the two most relevant places. First, blueprint designers can include the Script activity when orchestrating new server environments.
When this Script task package is added to a blueprint, the blueprint designer is asked to choose both the target server and execution mode (PowerShell/Command/SSH), and then enter the actual script statement. In the “Script” textbox, the user inputs a single statement or multiple statements.
While this Script task is a very helpful component of the server provisioning process, we think that it’s even more valuable later on when administrators have to manage collections of servers. As a reminder, Tier 3 cloud servers are organized into “Groups” which are more than just superficial containers. Rather, groups empower administrators to manage sets of servers as a single unit and perform bulk actions against those servers. So, it made perfect sense to also add the Script task here so that administrators could quickly run commands against any or all of the servers in a given group.
When would I use it?
Can’t you already upload customer-specific scripts to your Tier 3 script library today? Sure you can, but we wanted to support even greater agility and allow administrators to run quick, arbitrary commands without going through the traditional “write-upload-approve-select” workflow process. Imagine being able to quickly and reliably perform the following operations across an entire stack of servers at once: turn off/on machine services, restart web applications, delete log files, open a firewall port, or update Windows registry values. I’m sure that you could come up with countless more examples of actions that can be performed with a single script statement.
Example #1 – Enabling Services Across Servers
Let’s walk through an example. Assume that we have a set of servers that perform other functions until they are needed to act as web servers. These Windows servers, which have Microsoft’s IIS web server software already installed, have their IIS website disabled and their World Wide Web Publishing Service turned off (and thus aren’t listening for HTTP requests). As expected, trying to serve up a web page on one of these machines results in an error.
What if an administrator wants to rapidly turn on the WWW Service and the “Default Web Site” on the server? They *could* go server by server and manually turn things on. However, that’s time-consuming and error prone. Instead, what if they had a quick PowerShell script that takes care of all that?
To get started, I click on the Group Tasks button and choose Execute Script. In the prompt that follows, I select the Script package and apply the following multi-statement PowerShell command: Start-Service "W3SVC"; import-module webadministration; Start-WebSite -Name "Default Web Site".
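Broken out one statement per line, that command does the following:

```powershell
Start-Service "W3SVC"                      # start the World Wide Web Publishing Service
Import-Module WebAdministration            # load the IIS administration cmdlets
Start-WebSite -Name "Default Web Site"     # bring the stopped site back online
```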
On the next prompt, I’m asked which of the group’s servers to apply this to.
That’s all there is to it! The corresponding blueprint finishes executing the script against both machines in just 12 seconds. I should now be able to access the website on either of these two servers.
I confirmed that this was the case by logging into the pair of servers and seeing that the WWW service was running and the website was available.
Example #2 – Executing Server-side Scripts
In another case, assume that each server has a set of scripts available on one of its storage drives. One of those scripts edits a log file whenever a new version of the software application gets installed in the server farm. After performing a software update across the server group, our administrator wants to trigger this server-side batch file.
First off, notice that I have a Windows Command script (WriteLog.bat) installed on the server. This batch file simply writes a new entry to the server’s log file.
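A hypothetical version of such a batch file could be as simple as appending a timestamped line (the log path is made up):

```bat
REM WriteLog.bat - hypothetical sketch; appends a timestamped entry to a log
echo %date% %time% - application update applied >> C:\Logs\deploy.log
```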
Next, I create a new Group Task, select the Script package, set the script type as Command, and input the local batch file to execute.
Almost immediately, the Windows Command executes the local batch script and updates the log file.
Tier 3 Blueprints and Group Management are engineered to make server administration work at scale. This new Script task helps ensure that quick actions can be consistently and reliably executed across a vast number of machines. While we still encourage customers to upload (complex) scripts that need to be used over and over again, we hope that this new feature offers a nice alternative for simple actions!