
Red Hat CTO: Scalability, usability key RHEL 8 components

Linux software is expanding its use cases for data center and cloud options. Red Hat CTO Chris Wright details how the company is helping improve IT management and visibility.

As data center infrastructure grows beyond on-premises facilities, admins and developers need ways to effectively manage hardware through software. With Linux as the standard for many data centers, organizations must find new techniques to use the OS beyond server deployments.

In this interview, Chris Wright, vice president and CTO of Red Hat, talked about some of the recent Red Hat Enterprise Linux (RHEL) updates, how open source software can address scalability and workflow issues, and what's next for the Linux community.

Why did Red Hat focus on RHEL 8 as the foundation for hybrid and multi-cloud environments?

Chris Wright: At Red Hat, we're building hybrid clouds; our kind of mission is the open hybrid cloud. It's all built at the very bottom of the stack from Linux. One of the things that we wanted to do is make sure that Linux -- RHEL specifically -- was the best operating system to run across that open hybrid cloud.

RHEL 8 is a new release, so there's the basics of just refreshing some of the contents so it's more up to date with the latest and greatest from the RHEL community projects. But then, if we look at how to compose the operating system to make it useful in the right context and if we think about running Linux in the cloud, what can we do to improve manageability and visibility?

So, [Red Hat] started thinking of it in the context of: 'We understand the operating system; we know servers have been doing this for decades, but we should be reimagining it from the point of view that the infrastructure is an open hybrid cloud.' This was a way to really crystallize our engineering teams' thinking.

How did you shift your engineering teams' focus to a hybrid architecture?


Wright: When it comes to hardware architectures, Linux enables all kinds of hardware. It's virtual machines [VMs], it's physical machines, it's different clouds and instances, and there's nothing fundamentally new [in that sense], though a new virtual machine type might look a little bit different. But Linux is a building block for applications that are sitting across the cloud. Historically, an OS came on a CD. Today, it might come as a container.

We have to make sure we're building the right internal tooling to produce the right artifacts. We want users to have that flexibility around how they compose the OS, because a customer is always customizing how they use the operating system. But it's just accelerated and even automated in the cloud.

The other key piece is scaled manageability. We [at Red Hat] recognize the world today is so automated, distributed and scaled that we want to give the right tools and capabilities for the operations teams to manage that scale.

Why do you think scalability is so difficult for customers?

Wright: Scale brings a kind of complexity. I was talking to a customer who described it this way: 'We have upgrades we have to make to our system. If we can't do those within this time window, we will never finish upgrading our system.' It's sort of like painting the Golden Gate Bridge -- you go back [to the beginning] once you reach the end, and you'd never finish.

Once organizations get to a certain scale, developers and admins have to think about how to make tasks efficient. A human logging in to every machine, typing commands and logging out is not only time-consuming; it's also error-prone.
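The manual workflow Wright describes is often replaced by scripted, parallel execution across an inventory of hosts. The sketch below is a minimal illustration of that idea, not Red Hat tooling; the hostnames and command are hypothetical placeholders, and a real version would connect over SSH (or use a tool like Ansible) instead of echoing locally.

```python
# Minimal sketch: run one command across many hosts in parallel,
# then report failures, instead of logging in to each machine by hand.
import subprocess
from concurrent.futures import ThreadPoolExecutor


def run_on_host(host, command):
    """Run a command for one host; a real version would shell out via ssh."""
    # Placeholder: echo the intent locally rather than opening a connection.
    result = subprocess.run(
        ["echo", f"{host}: {command}"], capture_output=True, text=True
    )
    return host, result.returncode


hosts = ["web01", "web02", "db01"]  # hypothetical inventory
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(lambda h: run_on_host(h, "dnf -y update"), hosts))

failed = [h for h, code in results if code != 0]
print(f"{len(results)} hosts processed, {len(failed)} failures")
```

Centralizing the loop this way also makes the error handling uniform: every host's exit code lands in one place, which is exactly what ad hoc manual logins can't give you.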

How can the OS help IT pros manage data centers and distributed computing operations at scale?

Wright: [Hybrid IT] is really exciting from an operating system geek point of view because it means, in a certain sense, that when the hardware is cool, the OS is cool again. As long as [the industry] was thinking that it's all cloud and VMs -- which are all kind of the same -- the OS was there exclusively to light up the VM and support the application.

We now have different types of optimized hardware for different types of workloads or use cases, like high-performance computing or machine learning and AI. Now, all this hardware has to be enabled by the OS. That's part of what we've been thinking about with RHEL forever, but now it's just more relevant.

Now, the number of machine or instance types isn't just small, medium and large; there are hundreds a month. And they're all optimized for different types of workloads and expose hardware directly into the virtual machine.

It's more interesting if admins are collecting information out of the OS and feeding that into a platform that's giving feedback on what they could do better. The integration with Red Hat Insights is that evolution of the OS. This hybrid cloud OS intelligence gives all this data; admins can feed it [into RHEL], and we can build models. Red Hat has a lot of customer information, and we can share that back with our customers through something like Red Hat Insights.

Is improving usability part of the reasoning behind the Red Hat Web Console?

Wright: The Red Hat Web Console is to make RHEL accessible to a different kind of user. The traditional user thrives on arcane knowledge and understanding the most obscure command-line options.

That's a small group of people. Technology is moving so fast that today's developers have to get a lot done. So, they'd rather have software as a service and not have to think about any of the internal details.

What do you think the Linux community is going to focus on in the next few years?

Wright: We're still pretty early in [AI and machine learning], so the community will definitely be a part of development; we've got some framework proliferation right now. We're probably not done, but at some point, we'll start to see consolidation around the best tools for machine learning.

There's research that could stimulate new ways of processing data. We also have lifecycle management of the data that trains a model and of the source code that builds an application; there are some similarities there. We understand source code [and how to work with the binary]. There isn't quite the same tooling around the data that trains a model. Those are areas that'll improve in the machine learning space, but [the focus] is about usability.

Then, there's this expansive distributed computing challenge we're embarking on. And that's edge computing in addition to on premises and off premises. How do we manage all that? We have all the tools, but we haven't stitched it together well enough to really do total distributed management.

This was last published in May 2019
