The safest way to live-test your ransomware, malware, and virus defenses

You aren’t going to release a live virus on your production system, so how do you test your defenses?

In the article “The State of Ransomware in 2020,” research suggests that a business is attacked by a cybercriminal every 11 seconds. And the report “The State of Ransomware 2021” shows that the frequency of attacks is up year over year, along with the diversity of business types being attacked. Later in the same report, you can see details from the various organizations that were attacked.

Couple this with “Cybersecurity Talent Crunch to Create 3.5 Million Unfilled Jobs Globally by 2021,” and it is apparent that many companies will have to rely on existing worker talent to combat an ever-increasing threat. Of course, high-tech companies have high-tech talent, but what about all the other types of organizations, like government, education, the service industry, and manufacturing? We all like to think we have skilled workers regardless of our industry. Still, under this growing threat, our current in-house cybersecurity skills might not be at the level needed to provide maximum safeguards.

So what are we to do?

Continue reading

Application Archiving in the Cloud

Introducing “Cold Storage” of complete application systems in the Cloud.

Traditional application archiving is often described in one of two ways:

1) Archiving – this is where an application has accumulated large amounts of historical data on Tier 1 primary storage within the data center. The basic concept is to take some of the older, infrequently accessed data and “archive” it, moving it to a read-only data warehouse built on less expensive storage. The idea is to save money by reducing the pressure to expand the more expensive storage tier over time. The application system remains “active,” but with only the newer, relevant data.
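The move itself is conceptually simple: copy rows older than a cutoff into a read-only archive table, then remove them from the active table. A minimal sketch using SQLite, with a hypothetical `orders` schema and dates standing in for a real application’s history:

```python
import sqlite3

# Hypothetical schema: an "orders" table that has accumulated years of history.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, placed_on TEXT, total REAL)")
conn.execute("CREATE TABLE orders_archive (id INTEGER PRIMARY KEY, placed_on TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders (placed_on, total) VALUES (?, ?)",
    [("2015-03-01", 10.0), ("2016-07-15", 25.5), ("2023-01-09", 99.0)],
)

CUTOFF = "2020-01-01"  # anything older counts as "infrequently accessed"

# Copy the old rows into the archive (destined for cheaper, read-only
# storage), then delete them from the active table, in one transaction.
with conn:
    conn.execute("INSERT INTO orders_archive SELECT * FROM orders WHERE placed_on < ?", (CUTOFF,))
    conn.execute("DELETE FROM orders WHERE placed_on < ?", (CUTOFF,))

active = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
archived = conn.execute("SELECT COUNT(*) FROM orders_archive").fetchone()[0]
print(active, archived)  # 1 2
```

In a real system the archive table would live in a separate warehouse on a cheaper storage tier, and the cutoff would come from a retention policy rather than a hard-coded date.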

Continue reading

Cloud Taxonomy

Non-Production Use Cases for the Cloud.

The following are potential ways to use the cloud for non-production work. The key is to deliver the needed infrastructure “quickly” for a specific need or task. Waiting weeks for infrastructure delivery should be considered an “anti-pattern,” since the cumulative waiting time over the course of a project would be considerable. Building internal resource-delivery processes with slow delivery times goes against the concepts described in works such as “The Phoenix Project,” “The Goal,” and “Healthcare Digital Transformation.” Here are a few ideas:

Continue reading

Resetting QA Test Data: The Cloud Way

Stop running database scripts to reset test data; there is a better way.

One of the common use-case problems we hear about from QA teams is figuring out how to reset test data once testing activity has been performed on a complex test environment. In most cases, the stories describe multi-day or multi-week processes that must be completed to reset a test system back to a known data state. If you don’t do a reset, you accumulate technical debt in your test data: any repetitive testing results become suspect, and quality decisions cannot be made from them. In some cases, it isn’t possible to start a new iteration of QA testing without first resetting your test data from the prior test run.
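The cloud alternative is to capture a “golden” snapshot of the whole environment once, then restore from it on demand. Here is a toy in-memory sketch of that snapshot/restore pattern; the class and its data are hypothetical stand-ins, since in a real cloud like Skytap the equivalent operations save a template of the entire environment, storage included, and recreate it in one step:

```python
import copy

class TestEnvironment:
    """Toy stand-in for a cloud test environment and its data state."""

    def __init__(self, data):
        self.data = data
        self._golden = None

    def snapshot(self):
        # In a real cloud, this saves a template/image of the whole
        # environment, including its storage, in a single operation.
        self._golden = copy.deepcopy(self.data)

    def restore(self):
        # Recreating from the template replaces days of reset scripts.
        self.data = copy.deepcopy(self._golden)

env = TestEnvironment({"customers": ["alice", "bob"]})
env.snapshot()                       # capture the known-good state before testing
env.data["customers"].append("eve")  # destructive test activity mutates the data
env.restore()                        # reset in one step, not days of scripts
print(env.data)  # {'customers': ['alice', 'bob']}
```

The point of the sketch is the workflow, not the mechanics: snapshot once per known-good state, mutate freely during testing, restore before every new test iteration.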

Continue reading

Leverage the Cloud to help consolidate on-prem systems

Use the Cloud as your “sandbox” to experiment and do R&D for on-prem systems.

This document discusses using a cloud model to architecturally validate the possibility of consolidating multiple application servers or instances into a smaller number of physical resources that will ultimately remain on-prem. For this document, the cloud offering from Skytap is used as the example cloud for the possible approach, although the same techniques can be leveraged in other cloud offerings.

It is important to note that this document is not advocating for reengineering applications from on-prem to the cloud, though that is a possibility. Instead, the focus of this document is to describe how to leverage the cloud to help validate the design of re-organizing a large number of physical on-prem servers down to a smaller number of resources also hosted on-prem. In this case, the cloud is used as the R&D “sandbox” for key design assumptions.

Continue reading

Prepare for AIX Migration to the Cloud

AIX in the cloud is now a “thing”.

When moving your AIX workloads from on-prem to the cloud, there are two big-ticket items to initially consider for planning and execution:

  1. Mapping Resources from on-prem to the cloud equivalent
  2. Techniques for the actual movement of the images

Mapping Resources

First, get a list of all the LPARs that are candidates for migration and capture the essential attributes like CPU allocation, memory, storage, IOPS, and expected network bandwidth for each server. If you attempt a straight “lift and shift,” you may or may not be able to do an exact mapping in a pure self-service model. Why? Because cloud vendors typically have “safety caps” on some resources that prevent an untrained cloud user or a runaway automation script from doing unwanted actions.
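That mapping check is easy to automate once the inventory exists. A minimal sketch, where the LPAR figures and the cap values are hypothetical; real numbers would come from your HMC/`lparstat` data and from your cloud vendor’s published self-service limits:

```python
# Hypothetical self-service safety caps published by the cloud vendor.
CLOUD_CAPS = {"cpus": 16, "memory_gb": 256, "storage_gb": 2048}

# Hypothetical on-prem LPAR inventory captured during discovery.
lpars = [
    {"name": "erp-prod", "cpus": 24, "memory_gb": 512, "storage_gb": 1500},
    {"name": "hr-test",  "cpus": 4,  "memory_gb": 32,  "storage_gb": 200},
]

def exceeds_caps(lpar, caps):
    """Return the attributes that won't fit under the self-service caps."""
    return [attr for attr in caps if lpar[attr] > caps[attr]]

results = {lpar["name"]: exceeds_caps(lpar, CLOUD_CAPS) for lpar in lpars}
for name, blocked in results.items():
    if blocked:
        print(f"{name}: contact the vendor to raise caps on {blocked}")
    else:
        print(f"{name}: fits within self-service limits")
```

LPARs that trip a cap aren’t unmigratable; they just need the vendor to raise the limit, which is worth flagging early in planning rather than discovering mid-migration.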

Continue reading

Quit hiding the cloud from your developers doing dev/test

If you say a person or organization “goes to great lengths” to achieve something, it means they try very hard and perhaps do extreme things to accomplish their goal.

One example of “going to great lengths” that I’ve seen at traditional companies is how they “hide” the cloud from their pool of potential technical consumers doing work like development. Instead of saying, “here it is…,” they block or restrict users from direct consumption. Developers don’t log in directly to Azure, AWS, or Skytap; they go to the “internal corporate portal,” fill out a web form describing what they want, and submit it. Then someone will eventually process it and create what is needed.

Continue reading

Chaos Engineering for Traditional Applications

Not all on-prem applications have a future in the cloud, but can those same on-prem applications leverage cloud-like capabilities to help make them more reliable?

In 2011, Netflix introduced a tool called Chaos Monkey to inject random failures into its cloud architecture as a strategy for identifying design weaknesses. Fast forward to today: the concept of resiliency engineering has evolved, creating jobs titled “Chaos Engineer.” Many companies, like Twilio, Facebook, Google, Microsoft, Amazon, Netflix, and LinkedIn, use chaos as a way to understand their distributed systems and architectures.
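At its core, the technique is small: pick a component at random and invoke its failure hook, then observe how the rest of the system copes. A minimal sketch, where the service names and their stop hooks are hypothetical placeholders; against real on-prem systems (or cloud clones of them), each hook would stop an actual service:

```python
import random

# Hypothetical service inventory; in practice each entry would be a hook
# that stops a real on-prem service (or a cloud clone of it).
services = {
    "app-server":  lambda: print("stopping app-server"),
    "db-listener": lambda: print("stopping db-listener"),
    "mq-broker":   lambda: print("stopping mq-broker"),
}

def chaos_strike(rng, registry):
    """Pick one registered service at random and invoke its failure hook."""
    victim = rng.choice(sorted(registry))
    registry[victim]()
    return victim

rng = random.Random(42)  # seeded so an experiment run is reproducible
victim = chaos_strike(rng, services)
```

Seeding the random generator matters for traditional systems: a reproducible failure sequence lets you rerun the same experiment after a fix and compare outcomes.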

But all of these companies are based on cloud-native architectures, and so the question is:

Can Chaos Engineering be applied to traditional applications that run in the data center and will probably never be moved to the cloud?

Continue reading

The Cloud Dilemma

What to do with traditional on-prem applications that don’t appear to have a path to the cloud?

“My app can’t be moved to the cloud… it is based on AIX or IBMi…”

What is implied is that the app owner doesn’t want to re-engineer their application to use cloud-native services, but instead wants to do a classic lift-and-shift without making any application code changes. Since IBMi (AS/400) and AIX run on IBM Power processors and not x86, the path to the cloud is not apparent for these types of applications.

Continue reading