- rclone - rsync for cloud storage
- Practical Deep Learning For Coders—18 hours of lessons for free
- 6 R's of a Cloud Migration - SPR
- Standage on VR
- Monki Gras 2017: Claire Giordano - Packaging - and the Gone Girl with the Dragon Tattoo on the Train - YouTube - This talk on packaging by @clairegiordano at @monkigras was great! (Loved the title too!)
Tuesday, February 28, 2017
Links for 02-28-2017
Friday, February 24, 2017
Podcast: Talking open source and communities with {code} by Dell EMC
Josh Bernstein, VP of Technology, and Clint Kitson, Technical Director for {code} by Dell EMC, sat down with me at the Open Source Leadership Summit to talk about their plans for this strategic initiative.
{code} by Dell EMC
Audio:
Link to MP3 (00:13:22)
Link to OGG (00:13:22)
Podcast: Security and Core Infrastructure Initiative with Nicko Van Someren
As the CTO of the Linux Foundation, Nicko Van Someren also heads the Core Infrastructure Initiative. The CII was created in the wake of high-visibility issues with widely used but poorly funded open source infrastructure projects. (Most notably, the Heartbleed vulnerability in OpenSSL.) In this podcast, Nicko discusses how the CII works, his strategy moving forward, and how consumers of open source software can improve their security outcomes.
In addition to discussing the CII directly, Nicko also talked about encouraging open source developers to think about security as a high priority throughout the development process--as well as the need to cultivate this sort of thinking, and to get buy-in, across the entire community.
Nicko also offered advice about keeping yourself safe as a consumer of open source. His first point was that you need to know what code you have in your product. His second was to get involved with open source projects that are important to your product because "open source projects fail when the community around them fails."
Core Infrastructure Initiative, which includes links to a variety of resources created by the CII
Audio:
Link to MP3 (00:15:01)
Link to OGG (00:15:01)
Transcript:
Gordon Haff: I'm sitting here with Nicko van Someren, who's the CTO of the Linux Foundation, and he heads the Core Infrastructure Initiative. Nicko, give a bit of your background, and explain what the CII is?
Nicko van Someren: Sure. My background's in security. I've been on the industry side of security for 20-plus years, but I joined the Linux Foundation a year ago to head up the Core Infrastructure Initiative, which is a program to try and drive improvement in the security outcomes of open-source projects. In particular, in the projects that underpin an awful lot of the Internet and the businesses that we run on it. The infrastructural components, those bits of open source that we all depend on, even if we don't see them on a day-to-day basis.
Gordon: Around the time that you came in -- you've been in the job, what, a little over a year, is that right? -- there were some pretty high-visibility issues with some of that infrastructure.
Nicko: Yeah, and I think it goes back a couple of years further. Around three years ago, the Core Infrastructure Initiative -- we call it the CII -- was set up, largely in the wake of the Heartbleed bug, which impacted nearly 70 percent of the web servers on the planet.
We saw a vulnerability in a major open-source project, which had very profound impact on people across the board, whether they were in the open-source community, or whether they were running commercial systems, or whether they were building products on top of open source. All of these people were impacted by this very significant bug.

While the community moved swiftly to fix the bug and get the patch out there, it became very apparent that as the world becomes more dependent on open-source software, it becomes more and more critical that those who are dependent on it support the development of those projects and support improving the security outcomes of those projects.
Gordon: With many of the projects that we're talking about, there was a tragedy-of-the-commons sort of situation, where you had a few volunteers -- not being paid by anyone, asking for donations on their PayPal accounts -- who, in many cases, were responsible for these very critical systems.
Nicko: Absolutely. Probably trillions of dollars of business were being done in 2014 on OpenSSL, and yet in 2013, they received 3,000 bucks worth of donations from industry to support the development of the project. This is quite common for the projects that are under the hood, not the glossy projects that everybody sees.
The flagship projects get a lot of traction with a big community around them, but there's all of this plumbing underneath that is often maintained by very small communities -- often one or two people -- without the financial support that comes with having big businesses putting big weight behind them.
Gordon: What exactly does the CII do? You don't really code, as I understand it.
Nicko: Well, I code in my spare time, but the CII doesn't develop code itself, for the most part. What we do is, we work to identify at-risk projects that are high-impact but low-engagement.
We try to support those projects with things like doing security audits where appropriate, by occasionally putting engineers directly on coding, often putting resources into architecture and security process to try to help them help themselves by giving them the tools they need to improve security outcomes.
We're funding the development of new security testing tools. We're providing tools to help projects assess themselves against well-understood security practices that'll help give better outcomes. Then, when they don't meet all the criteria, we help them achieve those criteria so that they can get better security outcomes.
Gordon: In terms of the projects under the CII, how do you think about that? What are the criteria?
Nicko: We try to take a fairly holistic approach. Sometimes we're investing directly in pieces of infrastructure that we all rely on, things like OpenSSL, Bouncy Castle, GnuPG, or OpenSSH, other security-centric projects.
But also things like, last year, we were funding a couple of initiatives in network time, those components that we're all working with but don't necessarily see at the top layer. We're also funding tooling and test frameworks, so we have been putting money into a project called Frama-C, which is a framework for C testing.
We've been funding The Fuzzing Project, which is an initiative to do fuzz testing on open-source projects and find vulnerabilities and report them and get them fixed.
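To make the fuzz-testing idea concrete, here is a minimal sketch of the technique -- random inputs thrown at a parser, with any input that raises an exception saved for triage. This is an illustration only, not a CII or Fuzzing Project tool, and the `parse` function is a hypothetical stand-in for whatever code is under test.

```python
import json
import random

def parse(data: bytes) -> None:
    """Stand-in for the code under test -- swap in a real parser."""
    text = data.decode("utf-8")  # may raise UnicodeDecodeError
    if text.startswith("["):
        json.loads(text)         # may raise json.JSONDecodeError

def fuzz(iterations: int = 10_000, max_len: int = 64) -> list:
    """Throw random byte strings at parse() and keep any input that raises."""
    crashes = []
    for _ in range(iterations):
        data = bytes(random.randrange(256) for _ in range(random.randrange(max_len)))
        try:
            parse(data)
        except Exception:
            # A real harness would separate expected errors (bad input
            # rejected cleanly) from genuine crashes worth reporting.
            crashes.append(data)
    return crashes

if __name__ == "__main__":
    print(f"{len(fuzz())} inputs raised an exception")
```

Real-world fuzzers are typically coverage-guided rather than purely random, which is what lets them dig out the deep bugs this random sketch would rarely reach.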
We've been working with the Reproducible Builds project to get binary reproducibility of build processes, so that people can be sure that when they download a binary, they know that it matches what would have been built if they downloaded the source.
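The verification step that reproducible builds enable is simple to sketch: rebuild from source in a pinned environment, then compare cryptographic digests of the rebuilt artifact and the downloaded one. A minimal Python illustration -- the two file paths are supplied by you, and the hard part, making the build itself deterministic, is outside this script:

```python
import hashlib
import sys

def sha256(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    downloaded, rebuilt = sys.argv[1], sys.argv[2]
    a, b = sha256(downloaded), sha256(rebuilt)
    print(f"downloaded: {a}\nrebuilt:    {b}")
    # A reproducible build yields bit-identical output, so digests match.
    sys.exit(0 if a == b else 1)
```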
We're also funding some more educational programs. For instance, the Badging Program allows people to assess themselves against a set of practices which are known good security practices, and they get a little badge for their GitHub project or for their website if they meet those criteria.
We have a Census Project, where we've been pooling different sets of data about the engagement in projects, the level of bug reporting, the quickness of turn-around of bug fixes, and the impact of those projects in terms of who's dependent on them, and trying to synthesize some information about how much risk there is. Then, we publish those risk scores and encourage fixes.

We're trying to take a mixture of some fairly tactical approaches, but also have investment in some strategic approaches, which are going to lead to all open-source projects getting better security outcomes in the long run.
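As a toy illustration of the kind of synthesis the Census Project does, you can combine impact and engagement signals into a single score -- high dependence plus a small maintainer community means high risk. The fields and weighting below are invented for the example, not the Census Project's actual methodology:

```python
from dataclasses import dataclass

@dataclass
class ProjectStats:
    contributors: int        # active maintainers
    open_security_bugs: int  # known, unfixed issues
    dependents: int          # downstream users/projects

def risk_score(p: ProjectStats) -> float:
    """Toy risk heuristic: high impact plus low engagement means high risk."""
    exposure = p.dependents * (1 + p.open_security_bugs)
    engagement = max(p.contributors, 1)
    return exposure / engagement

if __name__ == "__main__":
    tiny_but_critical = ProjectStats(contributors=2, open_security_bugs=3, dependents=10_000)
    big_flagship = ProjectStats(contributors=400, open_security_bugs=3, dependents=10_000)
    print(risk_score(tiny_but_critical), risk_score(big_flagship))
```

On these made-up numbers, the two-maintainer project scores two orders of magnitude riskier than the flagship, which is exactly the under-the-hood "plumbing" problem Nicko describes.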
Gordon: How do you split those? Certainly, with some of the projects, particularly early on, it was very tactical: "There's frankly a house fire going on here, and it needs to be put out." Then, some of the things that you're doing in terms of the assessment checklists and things like that feel much more strategic and forward-looking. How do you balance those two? Could you even put a percentage on it -- "Oh, I spend 30 percent of my time doing this"?
Nicko: That's, of course, the perennial question. We have finite resources and huge need for this. Resource allocation is something I ask my board members for input on. Historically, we have had a fairly even split between the tactical and the strategic.
Going forwards, we're trying to move to probably put more into the strategic stuff, because we feel like we can get better leverage, more magnification of the effect, if we put money into a tool and the capabilities to use that tool. I think one of the things we're looking at for 2017 is work to improve the usability of a lot of security tools.
There's no shortage of great tools for doing static analysis or fuzz testing, but there is often a difficulty in making it easy to integrate those into a continuous test process for an open-source project. Trying to build things to make it easier to deploy the existing open-source tools is an area of the strategic spend that we want to put a lot into in 2017.
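What "integrating those into a continuous test process" can look like in practice: a small gate script that runs an existing analyzer over the tree and fails the CI job when it reports findings. This is a hypothetical sketch assuming clang's `--analyze` mode is available on the build machine; the stderr check is a simplification of real diagnostic parsing.

```python
import subprocess
import sys
from pathlib import Path

def analyze_tree(root: str = ".") -> int:
    """Run the static analyzer over every C file; return the number of findings."""
    findings = 0
    for source in Path(root).rglob("*.c"):
        result = subprocess.run(
            ["clang", "--analyze", str(source)],
            capture_output=True, text=True,
        )
        # clang's analyzer reports diagnostics on stderr; a hard error
        # also shows up as a non-zero exit status.
        if result.returncode != 0 or "warning:" in result.stderr:
            findings += 1
            print(f"analyzer flagged {source}:\n{result.stderr}", file=sys.stderr)
    return findings

if __name__ == "__main__":
    # Non-zero exit fails the CI job, making the analyzer a merge gate.
    sys.exit(1 if analyze_tree() else 0)
```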
Gordon: As we also look forward at some of the areas that are developing at this point -- Automotive Grade Linux, for example, Internet of Things -- there are new vectors of threats coming in, and areas of infrastructure that maybe historically weren't that important from a security perspective are becoming much more so. What's on your radar in that regard?
Nicko: I think, obviously, one of the biggest issues that we're facing going forwards is with the Internet of Things. I think we have been seeing a lot of people forgetting all the things that we've learned in desktop and server security over the years, as they rush into getting things out there, Internet-connected.
Often, it's easy to have a good idea about Internet-connecting something and building a service around it. It's less easy to think about the security implications of doing that in a hasty manner.
We've been talking with a number of players in this space about, "How do we adapt some of the programs we've already built for improving the security process in open-source projects to apply those to the development of IoT devices?" I think that we can do quite a lot in that space, just with the tools we've already got, tuning them to the appropriate community.
Gordon: Anything else that you'd like to talk about?
Nicko: One of the biggest issues we face in improving the security outcomes in open source is encouraging open-source developers to think about security as a high priority -- as high a priority as performance or scalability or usability.
We've got to put security up there as one of the top-priority list items. We also have to make sure that, because most open-source projects get developed in a very collaborative way with a community around them, you get buy-in to taking it as a priority across the whole community.
The best first step to getting good security outcomes is to have people think about security early, have them think about it often, and have them keep it as a top-of-mind priority as they go through the development process. If they do that, then you can get very good security outcomes just by using the same practices we use everywhere else in software engineering.
Gordon: One of the areas I work in is DevOps and continuous integration and application platforms. One of the terms that's starting to gain currency there is DevSecOps, and the push-back on that is, "Oh, we know security needs to be in DevOps." Well, even if you know it, it doesn't happen a lot of the time.
Nicko: I think that's true. I think it's a question of making sure that you have it as a priority. At my last company, I was actively involved in doing high-security software, but we were using an agile development process.
We managed to square those two by making sure that security was there in the documentation as part of the definition of done. You couldn't get through the iterative process without making sure that you were keeping the threat models up to date and going through the security reviews.
Code review ought to involve security review, as well as just making sure that the tabs are replaced by four spaces. We need to integrate security into the whole process of being a community of developers.
Gordon: One other final area, and it's probably less under the purview of something like the CII, but as we've been talking about a lot at this conference, open source has become pervasive, and that's obviously a great thing.
It also means that people are in the position of grabbing a lot of code -- perfectly legally -- from all kinds of different repositories and sticking it into their own code, and it may not be the latest version, it may have vulnerabilities.
Nicko: Absolutely, and I think, key to keeping yourself safe as a consumer of open source...

Well, there are probably two things there. One is you need to know what you've got in your products. Whether you built them yourself or whether you brought them in, there's going to be open source in there.
You need to know what packages are in there, you need to know what versions of packages are in there. You need to know how those are going to get updated as the original projects get updated. That whole dependency tracking needs to be something that you think about as part of your security operations process.
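For a Python deployment, the first half of that advice -- knowing what packages and versions you have -- can be bootstrapped from the interpreter's own metadata. A minimal sketch; matching the resulting list against vulnerability feeds is the (omitted) next step.

```python
from importlib import metadata

# Enumerate every installed distribution and its version -- the raw
# material for the dependency tracking Nicko describes. Checking these
# against a known-vulnerability feed would be the follow-on step.
def inventory() -> dict:
    return {
        dist.metadata["Name"]: dist.version
        for dist in metadata.distributions()
    }

if __name__ == "__main__":
    for name, version in sorted(inventory().items(), key=lambda kv: kv[0].lower()):
        print(f"{name}=={version}")
```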
The other bit is: get involved. Open-source projects fail when the community around them fails. If you want a good security outcome from the open-source projects that you use, get involved. Don't just complain that it doesn't work; come up with a well-diagnosed bug report and file it.
Maybe produce a patch, and even if you don't produce the patch that gets accepted, you've given them the idea for how to fix it, and they'll go and recode it in their own style. If you're going to be dependent on the security of this project, put an engineer on it.
Get involved in these projects. The way to make sure that you get really good security outcomes is for people who care about the security of these products to get involved.
Gordon: Well, I think that's as good a finish as any! Thank you.
Podcast: Open source and cloud trends with IDC's Al Gillen
Al Gillen is responsible for open source research and oversees developer and DevOps research at IDC. Al gave a keynote at the Open Source Leadership Summit at which he provided some historical context for what's happening today in open source and presented recent research on digital transformation, commercial open source support requirements, and how organizations are thinking about cloud-native architecture adoption and deployment.
Listen to the podcast for the whole conversation, but a few specific points that Al made were:
- Digital transformation can be thought of as taking physically connected systems and logically connecting them, i.e. connecting the processes, the data, and the decision-making.
- It's important to bridge new cloud-native systems to existing functionality. Organizations are not going to be rewriting old applications for the most part, and those "legacy" systems still have a great deal of value.
- Enterprises are asking for open source DevOps tools, but most are specifically asking for commercially-supported open source tools.
Audio:
Link to MP3 (00:15:46)
Link to OGG (00:15:46)
Transcript:
Gordon Haff: Hi, everyone. Welcome to another edition of the "Cloudy Chat" podcast. I'm here at the Open Source Leadership Summit with Al Gillen of IDC, who gave one of the keynotes this morning. Welcome, Al. How about giving a little background about yourself?
Al Gillen: Hey, Gordon, thanks a lot. Thanks, everybody, for listening. This is Al Gillen. I'm a group vice president at IDC. I'm responsible for our open source research, and oversee our developer and DevOps research.
Gordon: One of the things you went through in your keynote this morning was the historical perspective of how Linux has developed. Both of us have pretty much been following Linux from the beginning, certainly from its beginnings as something that was interesting commercially. Maybe you could recap that in a couple of minutes or so.
Al: I actually went back to a presentation I delivered to the Linux Foundation at an event at the Collaboration Summit back in 2008. I pulled those slides up, because I was curious: "What can I learn from what we talked about back then, and how does that relate to what's going on in the industry today?"
I went back, and I pulled up the deck. I was looking at some of the things that I thought were really interesting. For example, I was looking at one of the first pieces of data, which compared perceptions of Linux from 1999 and 2001.
Remember what the time frame was there. Linux had only just begun to be commercially accepted in the '99-2000/2001 time frame. One of the things that served as a significant accelerator for Linux in that time frame was the dot-com bust.
What happened then is we had a big contraction in the stock market. Most large companies, what they did is they went and they started to cut costs. We all know that one of the places they first cut costs is IT.
Suddenly, the IT departments were charged with standing up new Web servers and new network-infrastructure servers and so forth, and they had no budget to do it. What did they do?
They went and they got a free copy of Linux. They recycled a well-equipped PC or x86 server that had been taken out of service, and they turned it into a Linux server.
When we look back at the data that we saw then, really, one of the big drivers for Linux was initial price. People said, "Yeah, it was great. The cost was really low." One of the things that was also amazing was that users back then rated the reliability of Linux as very, very high.
In fact, when you compare it to other operating systems, it compared very favorably to much more mature operating systems. That context was really fascinating, but when you think about it, that was just the beginning of a long gestation period for Linux.
Over the next, what, seven, eight, nine years, Linux became a truly mature and truly robust commercial operating system that had the features, the application portfolio, and the customer base to use it. It took basically a decade to get there.
Gordon: You've been doing some more research recently. What are your numbers showing today?
Al: A couple of things that I showed in the presentation today. One is we presented data on Linux operating system shipments. One of the things that's happened over the last few years is that Linux has continued to accelerate, in part because of the build-out of cloud.
Most of the public cloud infrastructure, with the exception of the Microsoft Azure cloud, is almost all Linux. To the extent that Google continues to build out and Amazon continues to build out, and companies like that -- Facebook, Twitter, and so forth -- it's primarily Linux being stood up.
That has driven the growth of non-commercial Linux, meaning distributions that are not supported by a commercial company you might think of. Rather, they're CentOS, or Debian, or potentially unsupported Ubuntu, things like that, as well as Amazon Linux, Google's own Linux, and so forth.
That's been really where a lot of the growth is, but that's not to say that there hasn't been growth in the commercial side of the market. There's been growth there as well.
Gordon: What are some of the drivers that you hear? I know you did some research for us. You also have some research here around commercially-supported environments and maybe some of the reasons why people buy those.
Al: That has been something which has been really consistent through the years. We find that large enterprise organizations have a tendency to prefer commercially-supported software.
That has always been the case with Linux, and yes, we find that there is [also] non-commercial Linux. You'll talk to any enterprise -- and you could talk to a really big Red Hat shop or a big SUSE shop -- and you ask them, "What is in your infrastructure?" They'll typically tell you, "Yeah, we're 95 percent or 98 percent Red Hat, but we've also got some CentOS," or, "We've got some Debian," or, "We've got SUSE for this one application."
They generally have a mix of other things in there. The same thing if you talk to a SUSE shop, where they'll say, "Yeah, we're mostly SUSE, but we've got some openSUSE," or again, "We've got some Ubuntu or CentOS, or something else in the mix."
The reason why is that these things get stood up for workloads that are considered not critical. Maybe they might be something simple like a DNS server, maybe a print/file server, or maybe a Web server which is providing some internal Web-serving capabilities -- something that's not critical if it suddenly disappears off the network. There are not going to be customers left hanging.
Gordon: Let's switch gears and talk about digital transformation. This is one of those terms that's almost a cliche at this point, at least at these types of events, because we hear it at every one of them. As somebody I was talking to recently said, just because it's a cliche doesn't mean it's not an important trend. What are some of the things that IDC is seeing about digital transformation?
Al: If we go out to, say, 2025, and we look back, I believe that we're going to look back and say, "Yeah, the mid-teen years were important years from a digital transformation perspective."
When we at IDC talk about digital transformation, what we're really talking about is the interconnection of all of the systems that are in our environments.
When I say interconnection, we're not talking about getting them all on the same network. We've done that. It's been done for 20 years already. What we're talking about is interconnecting the processes, interconnecting the data, interconnecting the decision-making. In many cases, that's not done.
We've got systems that are physically connected, but are not connected in a logical sense. That's what's happening with digital transformation. I might add that the way we expect that's going to happen is through a model where we build new applications that essentially bridge the existing functionality that's on these servers.
We're not going to be rewriting those old applications in a cloud-native format, for example. We're going to keep those applications. We may wrap them with some consistent API so we can get access to the logic and the data.
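One common shape for that kind of wrapping, sketched hypothetically here with Python's standard library, is a thin HTTP facade that exposes legacy logic behind a consistent JSON API. `legacy_lookup` is a stand-in for a call into the existing system; nothing here is prescribed by IDC.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def legacy_lookup(customer_id: str) -> dict:
    """Stand-in for existing business logic (e.g., a call into an old system)."""
    return {"id": customer_id, "status": "active"}

class FacadeHandler(BaseHTTPRequestHandler):
    """Expose the legacy function behind a consistent JSON-over-HTTP API."""
    def do_GET(self):
        # Expect paths like /customers/42
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "customers":
            body = json.dumps(legacy_lookup(parts[1])).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), FacadeHandler).serve_forever()
```

The point of the facade is that new cloud-native applications talk to a stable API while the legacy application behind it stays untouched.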
But at the end of the day, the business value in those applications that are in place today remains, and frankly, it's going to mean that the applications themselves and the servers that they run on are workloads that are going to be around for the long term.
Gordon: I think that's a really important point, because one of the things that I hear a lot when I talk to customers is the importance of bridging the older systems -- systems that may be modernizing but, as you say, that you're not turning into cloud-native systems.
On the other hand, you have these cloud-native infrastructures. I think, probably, in the industry, there's too much thinking of those as two disconnected islands, and not enough thinking of the bridges, the integrations, and so forth, between them.
Al: There's a really good parallel here, and I like to bring this story up, especially when I'm in a room full of end users. I like to say, "You guys remember what you were doing in 1999?" You get a little bit of a quizzical look, and I say, "Were you remediating any applications that had two-digit date codes?"
People start nodding their heads. The next question I ask is, "What did you do? Did you fix them?" and the heads keep nodding. I say, "OK, can I see a show of hands? How many of you have gotten rid of those systems, or do you still have them in use?"
All those same people put their hands up -- reluctantly, I might add -- and say, "Yeah, we still have those systems in use." The point is that the value of the systems does not go away. The value of the systems is in the data. It's in the processes and the business logic that are coded in those applications.
Going forward, we think the same thing's going to be true for the distributed computing environment. All of the Linux servers that you have in place, all of the Windows servers you have in place, have real, important business value. The logic and the data there is really valuable to your business, which means that you're going to want to use that going forward.
I do agree with you: when we build those cloud-native applications, they're going to help bridge these systems. But don't for a minute assume that those old systems have no value left.
Gordon: As we talk about the new applications that are going to be required for this digital transformation, what are some of the...I think you even used the term "pivot point" this morning. Tell us a little bit more about that.
Al: Again, taking a long view: if you go out to, say, 2025, and you look back, I think that we'll be able to draw a line in history and say, "Somewhere between 2015 and 2017 or 2018, there's a line where everything before it will be considered a legacy application, and everything after it is probably going to be a cloud-native, modern application."
Again, let's not associate the term legacy application with something that has no value. Let's assume that that's an architectural statement more than anything else.
When I think about it, I believe we're right in the midst of this transition where we begin to build all of our applications using a cloud-native format, which means that our applications are built to run on-prem in private cloud, or off-prem in public cloud, which means that we have flexibility on where they run, how we want to run them, how we want to scale them.
The other thing I might add is: remember that cloud-native and cloud-scale are not necessarily the same thing. There are lots and lots of applications that should be cloud-native, but not all applications have to have cloud scale.
Take, for example, your average enterprise. You've got -- pick your number -- 1 thousand, 10 thousand, 100 thousand users, whatever the number is, that access your business applications. That number does not scale up to 1 million or 10 million overnight.
By comparison, take somebody who's doing, say, business-to-consumer, where there could potentially be a consumer event that causes everybody to come in and access that application. There, you'd need to have the ability to do cloud-level scaling.
Gordon: Al, going back to commercial support still being important: you certainly see in the cloud an awful lot of consumption of free software, but you mentioned earlier that enterprises by and large do want commercialized tools. We're not talking just operating systems here.
Al: No. In fact, operating systems are probably one of the best-understood pieces of open-source software today. As we go up the stack, customers still see value associated with commercialization: a company that will take your project and make it something that is consumable will provide the support.
The reason why that's so valuable is that then the company does not have to have the expertise on staff. You don't have to have a kernel guy; you don't have to have a guy that knows how to patch Xen or KVM, for example, if there's a problem.
It's really important to have that taken care of by somebody if you're a commercial organization which is in the business of selling widgets or manufacturing things, or providing health care. That's your primary business. Your primary business is not being in the business of IT.
We ran a survey earlier this year -- I guess it was actually late in 2016. We were talking to people about their consumption of DevOps products. We asked a question which I think was really interesting.
The question was: if they have a chance to buy a product, are they going to look at a product which is open-source, or are they going to look at a product which is likely to be a closed-source and/or proprietary-type product?
We asked people to rank their preference on these things. What we found is that 45 percent of the people we talked to ranked an open-source-based product as their first choice over anything else.
If you asked them what their preferences were, for example, for a proprietary product, only 15 percent of the people said that was their first choice.
The reason why this is really interesting is that these companies are telling us that they want an open-source-based product, but they also told us that they wanted it to be commercially supported.
They could get the bits and run it as a project themselves -- we asked that question as well -- but that's not what they're asking for. They're asking for it to be commercially supported. The reason why is that if it breaks, you pick up the phone or you get on your computer and send an email, and you say, "Fix it."
Gordon: That was true going back with Linux in the early days as well. I think one of the things that's happening today when you look at DevOps tools is there is an incredible amount of innovation and number of products out there. That's good news.
The bad news is there's an incredible amount of innovation, rapidly changing products, and a need to integrate all of those together.
Al: You know what, Gordon? It's one of the challenges that we've had with fast-moving markets like this. When I say markets, I'm referring to open source, collectively.
The problem we have is that the technology changes so fast that the people -- the end-user organizations -- are not able to gain the skills fast enough to keep up with these technology changes.
Frankly, when we asked questions about Linux in the early 2000s, people said one of their top challenges was having the skill set to support Linux. Today, we find the same with questions about things like container technology and using DevOps tools.
Gordon: To net it all out and close this out, what are some of the recommendations, the guidance, that you give on probably pretty much a daily basis to your clients?
Al: There are a few things. Number one, recognize that cloud-native applications are going to be architected very differently than classic applications. That's pretty much a given, but when you think about it, it affects your choice of tools, it affects your choice of deployment scenarios, and it affects the skills that you need to have on staff.
Another thing is to recognize that we've moved to an era of platform independence far beyond anything we ever had before. We always like to talk about platform independence, but we've never really had it.
Now, with container technology and the ability to produce a true cloud-native application that's running on some kind of a framework which happens to be available on-prem or in cloud, you suddenly have the ability to move that application on-prem or off-prem, or both -- run in both places at the same time if you so choose -- and be able to do that in a way that's been unprecedented in our industry.
Finally, just to reiterate the other point: recognize that the existing applications don't lose their value. They still have value. Yes, they may get bundled up in a VM, or maybe packaged up in a container and dropped into somebody's IaaS cloud, but they're going to be around for the long term, and recognize that that's something you have to support.
Again, driving home the point I made earlier: recognize that all the new applications we build today are going to have to bridge the classic applications we've had, and the data that those applications support, together with the modern things that we're going to be doing with our new applications.