Edge to Cloud – Connect Worldwide
https://connect-community.org
Your Independent HPE Technology User Community

Call For Papers! Would you like to present at HPE Discover 2024 in Las Vegas?
https://connect-community.org/call-for-papers-would-you-like-to-present-at-hpe-discover-2024-in-las-vegas/
Thu, 08 Feb 2024 17:27:12 +0000

The Call for Papers is Now Open.

Are you ready for HPE Discover Las Vegas the week of June 17, 2024? Connect invites you to share your customer stories, focused on solutions, for our popular Tech Forum series at HPE Discover in Las Vegas.

We need your customer stories for networking, security, IoT, AI, mobility, and cloud innovations.

Do you have a breakout session for Discover? Share your solutions with your global HPE User Community.

Connect invites you to share a 30-minute presentation at HPE Discover focusing on solutions built with your HPE technology investments. Your presentation will be followed by a 30-minute informal roundtable discussion moderated by an HPE technology expert.

This is your opportunity to make connections that count.

  • Do you have firsthand experience with HPE products and solutions—and a unique point of view?
  • Are you eager to share your expertise with fellow attendees?
  • Do you want to be recognized as a thought leader within the networking community?

Connect Tech Forums in Las Vegas are scheduled on Tuesday, June 18, and Wednesday, June 19. All sessions are advertised in the official Discover Agenda/Live Catalog, and all accepted speakers will receive a complimentary Speaker registration to HPE Discover Las Vegas.

The deadline for consideration is March 1st.

To submit your session for consideration, we need:

    • Session Title
    • Session Abstract (500 characters max, including spaces; use as many keywords as possible for maximum searchability)
    • Speaker Name and Email Address
Do you have a customer story to share at HPE Discover 2023 in Las Vegas?
https://connect-community.org/do-you-have-a-customer-story-to-share-at-hpe-discover-2023-in-las-vegas/
Tue, 07 Mar 2023 18:07:20 +0000
Are you ready for HPE Discover Las Vegas on June 20-22, 2023? Connect is looking for customer stories that focus on solutions for our popular Tech Forum series at HPE Discover in Las Vegas.

Do you have a breakout session for Discover? Share your solutions with your global HPE User Community.

Connect invites you to share a 30-minute presentation at HPE Discover focusing on solutions built with your HPE technology investments. Your presentation will be followed by a 30-minute informal roundtable discussion moderated by an HPE technology expert.

This is your opportunity to make connections that count.

Connect Tech Forums in Las Vegas are scheduled on Tuesday, June 20, and Wednesday, June 21. All sessions are advertised in the official Discover Agenda/Live Catalog, and all accepted speakers will receive a complimentary Speaker registration to HPE Discover Las Vegas.

The deadline for consideration is March 20th.

To submit your session for consideration, we need:

    • Session Title
    • Session Abstract (500 characters max, including spaces; use as many keywords as possible for maximum searchability)
    • Speaker Name and Email Address
Intel and HPE align around diverse AI data and processing units
https://connect-community.org/intel-and-hpe-align-around-diverse-ai-data-and-processing-units/
Wed, 07 Sep 2022 18:00:10 +0000

In many industries, ’tis the season for artificial intelligence (AI). In some ways it’s a redux of the early days – in 1956, to be precise – immediately following the coining of the term “AI” at Dartmouth College in New Hampshire, USA.

Back then, AI investments among competing countries created an effective AI “arms race”. There were controversies, with early applications being developed in gaming, robotics, and autonomous vehicles. Sound familiar? Well, innovation is either something new, or something nobody remembers. But today’s modern AI is a mixture of both, and HPE and Intel® have the something new part.

Machine learning comes to the distributed edge

With the newly announced HPE Swarm Learning, the industry’s first privacy-preserving, decentralized machine learning solution, HPE is bringing AI to distributed enterprise edges – where the action is – and where large data sets contain pent-up insights that directly affect business outcomes. With HPE Swarm Learning, multiple geographically dispersed locations contribute to the machine learning, rather than depending on the limitations of a single location. When training begins, the accuracy and efficacy of the AI algorithms benefit from diverse data sets applied to the problem at hand.
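
To make the decentralized pattern concrete, below is a minimal Python sketch of swarm-style training: each site refines a model on data that never leaves it, and only the resulting parameters are shared and merged. This illustrates the general technique only; it is not the HPE Swarm Learning API, and the logistic-regression model, simulated sites, and plain parameter averaging are all simplifying assumptions.

```python
# Illustrative sketch of swarm-style learning (not the HPE Swarm Learning API):
# each site trains locally on its own data, and only model parameters,
# never raw records, are averaged across sites.
import numpy as np

def local_train(weights, data, labels, lr=0.1, epochs=5):
    """One site's local refinement step: plain logistic-regression SGD."""
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-data @ weights))
        grad = data.T @ (preds - labels) / len(labels)
        weights = weights - lr * grad
    return weights

def swarm_round(global_weights, sites):
    """Each site trains on data that never leaves it; only the updated
    weights are transmitted, then merged here by simple averaging."""
    updates = [local_train(global_weights.copy(), X, y) for X, y in sites]
    return np.mean(updates, axis=0)

# Three simulated edge sites, each holding private data locally.
rng = np.random.default_rng(0)
sites = [(rng.normal(size=(100, 4)), rng.integers(0, 2, size=100))
         for _ in range(3)]

weights = np.zeros(4)
for _ in range(10):  # ten coordination rounds
    weights = swarm_round(weights, sites)
print("merged model weights:", weights)
```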

This approach to AI, based on inclusive data from multiple distributed edges, aligns well with Intel’s new Data Center GPU Flex Series, which is targeted at AI along with cloud gaming and enterprise graphics. It supports an open, flexible, standards-based software stack – and its unified programming model means developers can quickly deploy applications on Intel GPUs or combined CPU/GPU-based systems.

HPE Swarm Learning accommodates a diversity of data from edge to cloud, and, paired with the Intel Flex Series GPUs’ open software, it enables a diversity of processing-unit configurations. These flexibilities can enhance AI deployments and are particularly useful in distributed edge-to-cloud architectures: it won’t be necessary to replicate IT infrastructure at every edge, since common production code can run at many existing edge locations.

 

Enhanced, secure data management – even at remote edges

Further, edge data can remain at the edge while HPE Swarm Learning transmits just the insights derived from the data. This feature, coupled with Intel Software Guard Extensions on HPE servers with Intel Xeon® Processors, affords secure data management at remote edges. With a choice of Intel Flex Series 140 and Intel Flex Series 170 GPUs, the diversity of deployment options is further enhanced.

This makes remote enterprise edges smarter and more secure, and puts them to work faster in support of business outcomes.

Discover more! Learn how HPE and Intel digitally transform edge-to-cloud platforms with AI.

Meet the author, Dr. Tom Bradicich!  

Dr. Tom Bradicich began his career at IBM, serving as an IBM Fellow, Server CTO, R&D VP, and Distinguished Engineer. He led teams that conceived and developed the new product categories of private on-premises clouds, Converged Systems/HCI, and predictive analytics software for Windows™; cofounded several industry standards; and was elected to the IBM Academy of Technology. While at National Instruments, an industrial OT company, he served as an NI Fellow, leading teams that pioneered today’s modern OT/IT convergence, industrial systems reliability, and big analog data™ solutions.

In 2021 Tom was named a Top IoT Influencer by Onalytica, IoT Czar of the Year by IoT Innovator, Top IIoT Influencer by CB Tech, and CRN’s Top 100 Executives and Top 25 Disrupters for 3 years. He was inducted into the NC State University Alumni Hall of Fame, and received the IBM Chairman’s Award.

Currently, Tom is an HPE Fellow, heading marketing initiatives such as HPE solutions stacks, developing and delivering marketing collateral, sales training, and innovative partner GTM programs. He has held various roles at HPE such as GM & VP of the Servers and Edge Systems & SW BU with P&L responsibility, which was HPE’s fastest growing BU. As VP of Server Engineering, and HPE Edge & IoT SW Labs Director, Tom led teams to conceive and launch HPE’s first Edge/IoT corporate strategy, the new product category Converged Edge Systems, Edge as-a-Service SW, and industrial data management SW.

Throughout his career, Tom and his teams developed, launched, and sold dozens of SW and systems products, receiving many analyst, media, and industry awards. He holds several US patents, was executive sponsor for the IBM Women’s Inventors Network, and currently advises financial and industry analysts. Tom served on the Board of Directors of Aspen Technology, a public industrial AI SW company, and the advisory boards of three SW and silicon chip start-ups. He frequently delivers keynotes and media interviews, is an advisor to womenincloud.com, University of Florida Advisory Board and Diversity Committee, and founded the charity www.sockrelief.com.

HPE GreenLake Selected by Worldline to Modernize Mission-Critical Payments
https://connect-community.org/hpe-greenlake-selected-by-worldline-to-modernize-mission-critical-payments/
Tue, 29 Mar 2022 19:23:38 +0000

 

Largest European digital payments provider selects HPE GreenLake edge-to-cloud platform to accelerate digital transactions and deliver exceptional customer experience

BRUSSELS–(BUSINESS WIRE)–Mar. 22, 2022– Hewlett Packard Enterprise (NYSE: HPE) today announced that Worldline, a European-based global payments provider, has selected the HPE GreenLake edge-to-cloud platform to implement a major performance upgrade to its mission-critical payments platform to meet the accelerated growth of online transactions. By leveraging HPE GreenLake’s flexible as-a-service model and HPE Financial Services’ asset renewal program, which funded approximately 25% of the platform refresh, Worldline achieved this significant upgrade with no upfront investment.

Worldline is the largest European and the world’s fourth-largest payment provider, with operations in over 50 countries. The company provides payments and transactional services to the full supply chain – from seller to buyer — and has been delivering on a vision of a cashless economy by developing agile, customer-centric solutions that are rooted in strong technology. The industry relies heavily on back-end infrastructure to ensure the payment value chain is resilient, highly available, and always on. Latency and downtime on card transactions have an immediate impact on the customer experience.

The COVID-19 pandemic has had an enormous impact on the finance and banking sector, with significant fluctuations in demand and revenue. Over the same period there has been massive growth in online purchasing as more countries transform into cashless societies, resulting in an even greater reliance on secure global payment transactions. Worldline, like many payment providers, needed to quickly and efficiently scale up its offering to support its vendors and customers through the growing volume of digital payments, improving its already reliable estate of servers and storage to meet current and future demand while reducing ongoing operational costs.

The HPE GreenLake platform can scale up and down as business demand fluctuates so that Worldline can manage the growth in demand. The platform delivers the cloud experience through a pay-per-use model while also meeting compliance and regulatory requirements.

“Working with HPE has enabled us to navigate the turbulent market caused by COVID-19 via a combination of legacy asset buy-back and a flexible as-a-service approach,” said Frédéric Papillon, Managing Director Production Systems at Worldline. “We feel confident that these solutions will enable us to offer our customers a highly secure and efficient platform for digital transactions that keeps the supply chain moving, and also provide a cost-competitive solution that delivers value for money.”

The HPE GreenLake platform solution leverages HPE NonStop systems, which are ideally suited to support payment transactions with trusted, reliable, 100% fault-tolerant capabilities required for mission-critical environments. HPE NonStop meets the stringent demands of a secure financial transaction workload, with the always-on, resilient design ensuring operational excellence and exceptionally high levels of availability. The solution was identified as the best cloud service to support Worldline’s mission-critical business with the agility and flexibility to deliver an accelerated time-to-market for new applications and services.

The old data center IT infrastructure will be decommissioned after the transition, handled in a safe and sustainable manner in HPE’s Technology Renewal Center to minimize electronic waste. The transformation to the new platform will be managed by HPE Pointnext Services in close collaboration with Worldline, without any disruption to existing workflows.

“Together, the HPE GreenLake platform and HPE Financial Services capabilities will allow Worldline to achieve significant cost savings, and also power a significant refresh and modernization of their mission-critical payments platform,” said Gilles Thiebaut, Managing Director, SVP Global Sales for Northern Western Europe. “This upgrade, delivering improved performance and extending the operational life, was achieved without any upfront investment and is delivered on a pay-per-use model so the solution meets both their technical and financial requirements.”

About HPE GreenLake

The HPE GreenLake edge-to-cloud platform enables customers to accelerate data-first modernization and provides over 50 cloud services that can run on-premises, at the edge, in a colocation facility, and in the public cloud. In Q1 2022, HPE reported Annual Recurring Revenue of $798 million and as-a-service orders growth of 136 percent year-over-year. In April 2022, with the onboarding of Aruba customers, HPE will add over 120,000 customers to the HPE GreenLake platform. The scalable, pay-as-you go HPE GreenLake platform also delivers robust security, compliance, and control, and supports a broad partner ecosystem – including channel partners, distributors, independent software vendors, public cloud providers, service providers, and system integrators. For more information on HPE GreenLake, please visit: https://www.hpe.com/us/en/greenlake.html.

About Worldline

Worldline [Euronext: WLN] is the European leader in the payments and transactional services industry and #4 player worldwide. With its global reach and its commitment to innovation, Worldline is the technology partner of choice for merchants, banks and third-party acquirers as well as public transport operators, government agencies and industrial companies in all sectors. Powered by over 20,000 employees in more than 50 countries, Worldline provides its clients with sustainable, trusted and secure solutions across the payment value chain, fostering their business growth wherever they are. Services offered by Worldline in the areas of Merchant Services; Terminals, Solutions & Services; Financial Services and Mobility & e-Transactional Services include domestic and cross-border commercial acquiring, both in-store and online, highly secure payment transaction processing, a broad portfolio of payment terminals as well as e-ticketing and digital services in the industrial environment. In 2020 Worldline generated a proforma revenue of 4.8 billion euros. Please visit: worldline.com

About Hewlett Packard Enterprise

Hewlett Packard Enterprise (NYSE: HPE) is the global edge-to-cloud company that helps organizations accelerate outcomes by unlocking value from all of their data, everywhere. Built on decades of reimagining the future and innovating to advance the way people live and work, HPE delivers unique, open and intelligent technology solutions delivered as a service – spanning Compute, Storage, Software, Intelligent Edge, High Performance Computing and Mission Critical Solutions – with a consistent experience across all clouds and edges, designed to help customers develop new business models, engage in new ways, and increase operational performance. For more information, visit: www.hpe.com

Do you have a customer story to share at HPE Discover Las Vegas?
https://connect-community.org/do-you-have-a-customer-story-to-share-at-hpe-discover-las-vegas/
Thu, 13 Jan 2022 19:00:00 +0000
Are you ready for HPE Discover Las Vegas on June 28-30, 2022? Connect is looking for customer stories that focus on solutions for our popular Tech Forum series at HPE Discover in Las Vegas.

Do you have a breakout session for Discover? Share your solutions with your global HPE User Community.

Connect invites you to share a 30-minute presentation at HPE Discover focusing on solutions built with your HPE technology investments. Your presentation will be followed by a 30-minute informal roundtable discussion moderated by an HPE technology expert.

This is your opportunity to make connections that count.

Connect Tech Forums in Las Vegas are scheduled on Tuesday, June 28, and Wednesday, June 29. All sessions are advertised in the official Discover Agenda/Live Catalog, and all accepted speakers will receive a complimentary Speaker registration to HPE Discover Las Vegas.

The deadline for consideration is April 10th.

To submit your session for consideration, we need:

    • Session Title
    • Session Abstract (500 characters max, including spaces; use as many keywords as possible for maximum searchability)
    • Speaker Name and Email Address
Fear of missing out on this year’s best HPE Ezmeral blogs?
https://connect-community.org/fear-of-missing-out-on-this-years-best-hpe-ezmeral-blogs/
Wed, 12 Jan 2022 19:35:13 +0000

Did you miss some HPE Ezmeral blogs this past year? No worries – we’ve got you covered: announcing The great HPE Ezmeral blog countdown – top 10 posts of 2021!

To celebrate the end of a fantastic year, we’ve curated the top 10 HPE Ezmeral blog posts published in 2021 on the HPE Ezmeral blogsite and the HPE Dev site. We’re counting down to the #1 blog on each, so grab your favorite beverage and settle in – lots of great reading below!

Let’s start with the HPE Ezmeral: Uncut blogsite – Top posts of 2021

#5 — EMA names HPE Ezmeral software Value Leader

The 2021 edition of the EMA Radar Report was released in May 2021, and HPE Ezmeral was named a Value Leader across the three use cases driving the unification of data warehouses and data lakes. In this blog, Joann Stark gives the details and tells us why Enterprise Management Associates (EMA) calls HPE Ezmeral Software “the most enterprise-ready, open source, data lake and analytics platform in the market.”

#4 — Big HPE Ezmeral news: New cloud-native unified analytics and more!

Calvin Zito cracked the top-five list with his first-ever HPE Ezmeral post, about the September 28, 2021 HPE Ezmeral announcement with the HPE GreenLake edge-to-cloud platform. His blog summarizes the news and points us to the Chalk Talk he created to explain the announcement in more detail.

#3 — To the edge and back again: Meeting the challenges of edge computing

Some people think of edge computing as a glorified form of data acquisition or as local digital process control. Yet edge is much more than either of those. To better address the challenges of edge systems, it’s key to understand what happens at the edge, at the core, and in between. Ellen Friedman describes this process and reminds us that a surprising challenge of edge systems is moving traffic efficiently not only from edge to core but also back again.

#2 — HPE, Intel, and Splunk have done it again!

Coming in at #2, this blog written by two of our HPE Ezmeral Experts, Elias Alagna and Rajesh Vijayarajan, details how the team was determined to push Splunk’s ingest performance and test new components (compared with testing done in 2020). At 10.4 TB per day of ingest per server, the HPE, Intel, Splunk solution is now performing 20.8 times more indexing throughput than our baseline test result at 500 GB per day per server of ingest.
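
The 20.8x figure is straightforward to verify from the two ingest rates; here is a quick sanity check (treating 1 TB as 1,000 GB, in line with the post’s round numbers):

```python
# 10.4 TB/day/server versus the 500 GB/day/server baseline.
ingest_gb_per_day = 10.4 * 1000   # 10.4 TB expressed in GB
baseline_gb_per_day = 500
print(ingest_gb_per_day / baseline_gb_per_day)  # 20.8
```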

We’ve counted down to the top performer on HPE Ezmeral: Uncut. Now it's time to reveal the winner … drumroll please!

#1 —  HPE Ezmeral 5.3 puts the “EZ” into Analytics, DataOps, and App Modernization

This blog describes how the HPE Ezmeral Container Platform (now named HPE Ezmeral Runtime Enterprise) and ML Ops 5.3 make it simple to industrialize data science. This EZ release includes Apache Spark™ and MLflow integration, plus improvements for app modernization, data and model collaboration, policy management, and runtime security.

Hang on, folks – we’re only half done. Next up are the top HPE Ezmeral posts on the HPE Developer blogsite.

#5 — Data Analytics with PySpark using HPE Ezmeral Runtime Enterprise

PySpark is an interface for Apache Spark™ in Python. Apache Spark is a unified analytics engine for big data processing. It allows developers to perform data processing on files in a distributed filesystem, like the Hadoop distributed filesystem or HPE Ezmeral Data Fabric. Cenz Wong shows us how to run simple Spark jobs using the PySpark module on a Jupyter Notebook cluster instance deployed on HPE Ezmeral Runtime Enterprise.
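
As a flavor of what that looks like, here is a minimal PySpark word-count job of the kind such a notebook might run; the file path is a hypothetical placeholder, and nothing below is Ezmeral-specific beyond reading from whatever distributed filesystem the cluster exposes.

```python
from pyspark.sql import SparkSession

# Start (or attach to) a Spark session from the notebook.
spark = SparkSession.builder.appName("wordcount-demo").getOrCreate()

# Read a text file from the cluster's distributed filesystem
# ("/shared/sample.txt" is a placeholder path).
lines = spark.read.text("/shared/sample.txt")

# Split each line into words and count their frequencies.
words = lines.selectExpr("explode(split(value, ' ')) AS word")
words.groupBy("word").count().orderBy("count", ascending=False).show(10)

spark.stop()
```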

#4 — Application Modernization with the Application Workbench

One of the most significant issues facing enterprises in their journey towards digital transformation is the challenge of application modernization. In fact, 7 in 10 companies today struggle with legacy application maintenance while they tackle their digital transformation. In this post, Sahithi Gunna discusses the different approaches one can take to modernize an application and how the HPE Ezmeral Application Workbench can help.

#3 — Autopilot Kubernetes Deployments on HPE Ezmeral Runtime Enterprise

In this post, Vinothini Raju covers autopilot systems for Kubernetes and how the combination of gopaddle and HPE Ezmeral Runtime Enterprise enables enterprises to speed their modernization journey. She first explains the need for such systems – it all starts with a demand for efficiency in business operations.

#2 — Accessing HPE Ezmeral Data Fabric Object Storage from Spring Boot S3 Micro Service deployed in K3s cluster 

Containers and microservices are transforming edge and IoT platforms, enabling use cases that deploy in small-footprint Kubernetes clusters on edge nodes while persisting data at a central location. This data pipeline can be easily accessed by downstream complex analytics applications for further processing. In this article, Kiran Kumar Mavatoor discusses how to access the HPE Ezmeral Data Fabric Object Store (S3) from a Spring Boot S3 microservice application deployed in a K3s cluster, and how to perform basic S3 operations.
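
The post’s service itself is Java and Spring Boot; as a compact stand-in, the Python sketch below performs the same basic S3 operations against an S3-compatible endpoint using boto3. The endpoint URL, bucket name, and credentials are placeholders, not Data Fabric defaults.

```python
import boto3

# Point the S3 client at an S3-compatible object store
# (placeholder endpoint and credentials).
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com:9000",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Basic operations: create a bucket, write an object, read it back.
s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello from the edge")
obj = s3.get_object(Bucket="demo-bucket", Key="hello.txt")
print(obj["Body"].read().decode())
```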

And finally, we are ready for the #1 blog published in 2021 on the HPE Developer site. And the winner is…

#1 — On-Premise Adventures: How to build an Apache Spark lab on Kubernetes

Apache Spark™ is an awesomely powerful developer tool for finding the value in your data. In this post, Don Wake explains how he deployed Apache Spark in his on-premises lab, managed by HPE Ezmeral Runtime Enterprise, so he could try it out.

There you have it — our top 10 most popular posts published in 2021. I hope you learned something new about HPE Ezmeral. Be sure to follow us on LinkedIn to stay current on the latest and greatest.

Just in case you’re a glutton for blogs over the holidays, you can read our latest thought leadership content at CIO.com and Forbes.

 

As this year comes to a close and on behalf of the entire HPE Ezmeral team, I want to wish all our readers Happy Holidays!

 

Matt Hausmann
Hewlett Packard Enterprise

twitter.com/HPE_Ezmeral
linkedin.com/showcase/hpe-ezmeral
hpe.com/software

Matt Hausmann

Group Manager - Ezmeral GTM at Hewlett Packard Enterprise

Over the past decades, Matt has had the privilege of collaborating with hundreds of companies and experts on ways to constantly improve how data is turned into insights. This continues to drive him as the ever-evolving analytics landscape enables organizations to continually make smarter, faster decisions.

Want to manage your total cloud costs better? Emphasize the ‘Ops’ in DevOps, says Futurum analyst Daniel Newman
https://connect-community.org/2019-3-13-want-to-manage-your-total-cloud-costs-better-emphasize-the-ops-in-devops-says-futurum-analyst-daniel-newman/
Wed, 13 Mar 2019 20:02:08 +0000

Learn ways a managed and orchestrated cloud lifecycle culture should be sought across enterprise IT organizations.

The next BriefingsDirect Voice of the Analyst interview explores new ways that businesses can gain the most control and economic payback from various cloud computing models.

We’ll now hear from an IT industry analyst on how developers and IT operators can find newfound common ground to make hybrid cloud the best long-term economic value for their organizations.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help explore ways a managed and orchestrated cloud lifecycle culture should be sought across enterprise IT organizations is Daniel Newman, Principal Analyst and Founding Partner at Futurum Research. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Daniel, many tools have been delivered over the years for improving software development in the cloud. Recently, containerization and management of containers has been a big part of that.

Now, we’re also seeing IT operators tasked with making the most of cloud, hybrid cloud, and multi-cloud around DevOps – and they need better tools, too.

Has there been a divide or lag between what developers have been able to do in the public cloud environment and what operators must be able to do? If so, is that gap growing or shrinking now that new types of tools for automation, orchestration, and composability of infrastructure and cloud services are arriving?

Out of the shadow, into the cloud 

Newman: Your question lends itself to the concept of shadow IT. The users of this shadow IT find a way to get what they need to get things done. They have had a period of uncanny freedom.



But this has led to a couple of things. First of all, generally nobody knows what anybody else is doing within the organization. The developers have been able to creatively find tools.

On the other hand, IT has been cast inside of a box. And they say, “Here is the toolset you get. Here are your limitations. Here is how we want you to go about things. These are the policies.”

And in the data center world, that’s how everything gets built. This is the confined set of restrictions that makes a data center a data center.

But in a developer’s world, it’s always been about minimum viable product. It’s been about how to develop using tools that do what they need them to do and getting the code out as quickly as possible. And when it’s all in the cloud, the end-user of the application doesn’t know which cloud it’s running on, they just know they’re getting access to the app.

Basically we now have two worlds colliding. You have a world of strict, confined policies — and that’s the “ops” side of DevOps. You also have the developers who have been given free rein to do what they need to do; to get what they need to get done, done.

Get Dev and Ops to collaborate 

Gardner: So, we need to keep that creativity and innovation going for the developers so they can satisfy their requirements. At the same time, we need to put in guard rails, to make it all sustainable.

Otherwise we see not a minimum viable cloud – but out-of-control expenses, out-of-control governance and security, and difficulty taking advantage of both private cloud and public cloud, or a hybrid affair, when you want to make that choice.

How do we begin to make this a case of worlds collaborating instead of worlds colliding?

Newman: It’s a great question. We have tended to point DevOps toward “dev.” It’s really been about the development, and the “ops” side is secondary. It’s like capital D, lowercase o.

The thing is, we’re now having a massive shift that requires more orchestration and coordination between these groups.


You mentioned out-of-control expenses. I spoke earlier about DevOps and developers having the free rein – to do what they need to do, put it where they need to put it, containers, clouds, tools, whatever they need, and just get it out because that’s what impacts their customers.

If you have an application where people buy things on the web and you need to get that app out, it may be a little more expensive to deploy it without the support of Ops, but you feel the pressure to get it done quickly.


Now, Ops can come in and say, “Well, you know … what about a flex consumption-based model, what about multi-cloud, what about using containers to create more portability?”

“What if we can keep it within the constraints of a budget and work together with you? And, by the way, we can help you understand which applications are running on which cloud and provide you the optimal [aggregate cloud use] plan.”

Let’s be very honest, a developer doesn’t care about all of that. … They are typically not paid or compensated in any way that leads to optimizing on cost. That’s what the Ops people do.

Such orchestration — just like almost all larger digital transformation efforts — starts when you have shared goals. The problem is, they call it a DevOps group — but Dev has one set of goals and Ops has different ones.

What you’re seeing is the need for new composable tools for cloud services, which we saw at such events as the recent Hewlett Packard Enterprise (HPE) Discover conference. They are launching these tools, giving the Ops people more control over things, and — by the way — giving developers more visibility than has existed in the past.

There is a big opportunity [for better cloud use economics] through better orchestration and collaboration, but it comes down to the age-old challenges inside of any IT organization — and that is having the Dev and the Ops people share the same goals. These new tools may give them more of a reason to start working in that way.

Gardner: The more composability the operations people have, the easier it is for them to define a path that the developers can stay inside of without encumbering the developers.

We may be at the point in the maturity of the industry where both sides can get what they want. It’s simply a matter of putting that together — the chocolate and peanut-butter, if you will. It becomes more of a complete DevOps.

But there is another part of this people often don’t talk about, and that’s the data placement component. When we examine the lifecycle of a modern application, we’re not just developing it and staging it where it stays static. It has to be built upon and improved, we are doing iterations, we are doing Agile methods.

We also have to think about the data the application is consuming and creating in the same way. That dynamic data-use pattern needs to fit into a larger data management philosophy and architecture that includes multi-cloud support.

I think it’s becoming DevDataOps, not just DevOps, these days. The operations people need to be able to put in requirements about how that data is managed within the confines of that application’s deployment, yet kept secure, and in compliance with regulations and localization requirements.

DevDataOps emerges

Newman: We’ve launched the DevDataOps category right now! That’s actually a really great point, because if you think about where all of that lives — meaning IT orchestration of the infrastructure choices, whether in the cloud or on-premises — there has to be enough of the right kind of storage.

Developers are usually worried about data in the sense of what they can do with it to improve and enhance the applications. When you add in elements like machine learning (ML) and artificial intelligence (AI), that’s going to just up the compute and storage requirements. You now also have the edge and the Internet of Things (IoT) to consider for data. Most applications are collecting more data in real time. With all of these complexities, you have to ask, “Who really owns this data?”

Well, the IT part of DevOps, the “Ops,” typically worries about capacity and resource performance for data. But are they really worried about the data in these new models? It brings in that needed third category, because the Dev person doesn’t necessarily deal with the data lifecycle. The need to best use that data is a business-unit imperative, a marketing-level issue, a sales-level data requirement. It can include all the data that’s created inside of a cloud instance of SAP or Salesforce.


Just think about how many people need to be involved in orchestration to maximize that. Culturally speaking, it goes back to shared tools, shared visibility, and shared goals. It’s also now about more orchestration required across more external groups. So your DevOps group just got bigger, because the data deluge is going to be the most valuable resource any company has. It will be, if it isn’t already today, the most influential variable in what your company becomes.

You can’t just leave that to developers and operators of IT. It becomes core to business unit leadership, and they need to have an impact. The business leadership should be asking, “We have all this data. What are we doing with it? How are we managing it? Where does it live? How do we pour it between different clouds? What stays on-premises and what goes off? How do we govern it? How can we have governance over privacy and compliance?”

I would say most companies really struggle to keep up with compliance because there are so many rules about what kind of data you have, where it can live, how it should be managed, and how long it should be stored. 

I think you bring up a great point, Dana. I could probably rattle on about this for a long, long time. You’ve just added a whole new element to DevOps, right here on this podcast. I don’t know that it has to do with specifically Dev or Ops, but I think it’s Dev+Ops+Data — a new leadership element for meaningful digital transformation.

Gardner: We talked about trying to bridge the gap between development and Ops, but I think there are other gaps, too. One is between data lifecycle management – for backup and recovery and making it the lowest cost storage environment, for example. Then there is the other group of data scientists who are warehousing that data, caching it, and grabbing more data from outside, third-party sources to do more analytics for the entire company. But these data strategies are too often still divorced.

These data science people and what the developers and operators are doing aren’t necessarily in sync. So, we might have another category, which would be Dev+Data+DataScience+Ops.

Add Data Analytics to the Composition 

Newman: Now we’re up to four groups. You are firstly talking about the data from the running applications. That’s managed through pure orchestration in DevOps, and that works fine through composability tools. Those tools provide IT the capability to add guard rails for the developers, so they are not doing things in the shadows but instead work in coordination.

The other data category is the bigger analytical data. It includes open data, third-party data, and historical data that’s been collected and stored inside instances of enterprise resource planning (ERP) and customer relationship management (CRM) apps for 20 or 30 years. It’s a gold mine of information. Now we have to figure out an extraction process and incorporate that data into almost every enterprise-level application that developers are building. Right now, Dev and Ops don’t really have a clue what is out there and available across that category, because that’s being managed somewhere else, through an analytics group of the company.

Gardner: Or, developers will have to create an entirely different class of applications for analytics alone, as well as integrating the analytics services into all of the existing apps.

Newman: One of the HPE partners I’ve worked with in the past is SAS, and companies such as SAS and SAP are going to become much more closely aligned with infrastructure. Your DevOps is going to become your analytics Ops, too.


Hardware companies have built software apps to run their hardware, but they haven’t been historically building software apps to run the data that sits on the hardware. That’s been managed by the businesses running business intelligence software, such as the ones I mentioned.

There is an opportunity for a new level of coordination to take place at the vendor level, because when you see these alliances, and you see these partnerships, this isn’t new. But, seeing it done in a way that’s about getting the maximum amount of usable data from one system into every application — that’s futuristic, and it needs to be worked on today. 

Gardner: The bottom line is that there are many moving parts of IT that remain disjointed. But we are at the point now with composability and automation of getting an uber-view over services and processes to start making these new connections – technically, culturally, and organizationally.

What I have seen from HPE around the HPE Composable Cloud vision moves a big step in that direction. It might be geared toward operators, but, ultimately it’s geared toward the entire enterprise, and gives the business an ability to coordinate, manage, and gain insights into all these different facets of a digital business.

Newman: We’ve been talking about where things can go, and it’s exciting. But let’s take a step back.

Multi-cloud is a really great concept. Hyper-converged infrastructure, it’s all really nice, and there has been massive movement in this area in the last couple of years. Companies right now still struggle with the resources to run multi-cloud. They tend to have maybe one public cloud and their on-premises operations. They have their own expertise, and they have endless contracts and partnerships.

They don’t know which is the best cloud approach because they are not necessarily getting that total information. It depends on all of the relationships, the disparate resources they have across Dev and Ops, and the data can change on a week-to-week basis. One cloud may have been perfect a month ago, yet all of a sudden you change the way an application is running and consuming data, and it’s now in a different cloud.

What HPE is doing with HPE Composable Cloud takes the cloud plus composable infrastructure and, working through HPE OneSphere and HPE OneView, brings them all into a single view. We’re in a software and user experience world.

The tools that deliver the most usable and valuable dashboard-type of cloud use data in one spot are going to win the battle. You need that view in front of you for quick deployment, with quick builds, portability, and container management. HPE is setting itself in a good position for how we do this in one place.


Give me one view, give me my one screen to look at, and I think your Dev and Ops — and everybody in between — and all your new data and data science friends will all appreciate that view. HPE is on a good track, and I look forward to seeing what they do in the future.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Why enterprises should approach procurement of hybrid IT in entirely new ways
https://connect-community.org/2019-2-13-why-enterprises-should-approach-procurement-of-hybrid-it-in-entirely-new-ways/
Wed, 13 Feb 2019 20:56:35 +0000
Learn why changes in cloud deployment models are forcing a rethinking of IT economics, and maybe even the very nature of acquiring and cost-optimizing digital business services.


The next BriefingsDirect hybrid IT management strategies interview explores new ways that businesses should procure and consume IT-as-a-service. We’ll now hear from an IT industry analyst on why changes in cloud deployment models are forcing a rethinking of IT economics — and maybe even the very nature of acquiring and cost-optimizing digital business services.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help us explore the everything-as-a-service business model is Rhett Dillingham, Vice President and Senior Analyst at Moor Insights and Strategy. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is driving change in the procurement of hybrid- and multi-cloud services?



Dillingham: What began as organic adoption — from the developers and business units seeking agility and speed — is now coming back around to the IT-focused topics of governance, orchestration across platforms, and modernization of private infrastructure.

There is also interest in hybrid cloud, as well as multi-cloud management and governance. Those amount to complexities that the public clouds are not set up for and are not able to address because they are focused on their own platforms.


Gardner: So the way you acquire IT these days isn’t apples or oranges, public or private, it’s more like … fruit salad. There are so many different ways to acquire IT services that it’s hard to measure and to optimize. 

Dillingham: And there are trade-offs. Some organizations are focused on and adopt a single public cloud vendor. But others see that as a long-term risk in management, resourcing, and maintaining flexibility as a business. So they’re adopting multiple cloud vendors, which is becoming the more popular strategic orientation.

Gardner: For those organizations that don’t want mismanaged “fruit salad” — that are trying to homogenize their acquisition of IT services even as they use hybrid cloud approaches — does this require a reevaluation of how IT in total is financed? 

Champion the cloud

Dillingham: Absolutely, and that’s something you can address, regardless of whether you’re adopting a single cloud or multiple clouds. The more you use multiple resources, the more you are going to consider tools that address multiple infrastructures — and not base your capabilities on a single vendor’s toolset. You are going to go with a cloud management vendor that produces tools that comprehensively address security, compliance, cost management, and monitoring, et cetera.

Gardner: Does the function of IT acquisitions now move outside of IT? Should companies be thinking about a chief procurement officer (CPO) or chief financial officer (CFO) becoming a part of the IT purchasing equation?

Dillingham: By virtue of the way cloud has been adopted — more by the business units — they got ahead of IT in many cases. Now this is being pulled back toward a fuller financial view. That move doesn’t make the IT decision-maker into a CFO so much as turn them into a champion of IT. And IT goes back to being the governance arm, where traditionally they have been managing cost, security, and compliance.

It’s natural for the business units and developers to now look to IT for the right tools and capabilities, not necessarily to shed accountability but because that is the traditional role of IT, to enable those capabilities. IT is therefore set up for procurement.

IT is best set up to look at the big picture across vendors and across infrastructures rather than the individual team-by-team or business unit-by-business unit decisions that have been made so far. They need to aggregate the cloud strategy at the highest organizational level.

Gardner: A central tenet of good procurement is to look for volume discounts and to buy in bulk. Perhaps having that holistic and strategic approach to acquiring cloud services lends itself to a better bargaining position? 


Dillingham: That’s absolutely the pitch of a cloud-by-cloud vendor approach, and there are trade-offs. You can certainly aggregate more spend on a single cloud vendor and potentially achieve more discounts in use by that aggregation.

The rebuttal is that on a long-term basis, your negotiating leverage in that relationship is constrained versus if you have adopted multiple cloud infrastructures and can dialogue across vendors on pricing and discounting.

Now, that may turn into more of an 80/20-, 90/10-split than a 50/50-split, but at least by having some cross-infrastructure capability — by setting yourself up with orchestration, monitoring, and governance tools that run across multiple clouds — you are at least in a strategic position from a competitive sourcing perspective.

The trade-off is the cost-aggregation and training necessary to understand how to use those different infrastructures — because they do have different interfaces, APIs, and the automation is different.

Gardner: I think that’s why we’ve seen vendors like Hewlett Packard Enterprise (HPE) put an increased emphasis on multi-cloud economics, and not just the capability to compose cloud services. The issues we’re bringing up force IT to rethink the financial implications, too. Are the vendors on to something here when it comes to providing insight and experience in managing a multi-cloud market?

Follow the multi-cloud tour guide

Dillingham: Absolutely, and certainly from the perspective that when we talk multi-cloud, we are not just talking multiple public clouds. There is a reality of large existing investments in private infrastructure that continue for various purposes. That on-premises technology also needs cost optimization, security, compliance, auditability, and customization of infrastructure for certain workloads.

That means the ultimate toolset to be considered needs to work across both public and private infrastructures. A vendor that’s looking beyond just public cloud, like HPE, and delivers a multi-cloud and hybrid cloud management orientation is set up to be a potential tour guide and strategic consultative adviser.

And that consultative input is very valuable when you see how much pattern-matching there is across customers – and not just within the same industry but across industries. The best insights will come from knowing what it looks like to triage application portfolios, what migrations you want across cloud infrastructures, and the proper setup of comprehensive governance, control processes, and education structures.

Gardner: Right. I’m sure there are systems integrators, in addition to some vendors, that are going to help make the transition from traditional IT procurement to everything-as-a service. Their lessons learned will be very valuable.

That’s more intelligent than trying to do this on your own or go down a dark alley and make mistakes, because as we know, the cloud providers are probably not going to stand up and wave a flag if you’re spending too much money with them.


Dillingham: Yes, and the patterns of progression in cloud orientation are clear for those consultative partners, based on dozens of implementations and executions. From that experience they are far more thoroughly aware of the patterns and how to avoid falling into the traps and pitfalls along the way, more so than a single organization could expect, internally, to be savvy about.

Gardner: It’s a fast-moving target. The cloud providers are bringing out new services all the time. There are literally thousands of different cloud service SKUs for infrastructure-as-a-service, for storage-as-a-service, and for other APIs and third-party services. It becomes very complex, very dynamic.

Do you have any advice for how companies should be better managing cloud adoption? It seems to me there should be collaboration at a higher level, or a different type of management, when it comes to optimizing for multi-cloud and hybrid-cloud economics.

Cloud collaboration strategy 

Dillingham: That really comes back to the requirement that the IT organization partner with the business units. The more business units there are in the organization, the more IT is critical in driving collaboration at the highest organizational level and in being responsible for the overall cloud strategy.


The cloud strategy across the topics of platform selection, governance, process, and people skills — that’s the type of collaboration needed. And it flows into these recommendations from the consultancies of how to avoid the traps and pitfalls. For example: Avoiding mismanagement of expectations and goals in order to drive clear outcomes on the execution of projects, making sure that security and compliance are considered and involved from a functional perspective all the way through, and on down the list.

The decision of what advice to bring in is really about the topic and the selection on the menu. Have you considered the uber strategy and approach? How well have you triaged your application portfolio? How can you best match capabilities to apps across infrastructures and platforms?

Do you have migration planning? How about migration execution? Those can be similar or separate items. You also have development methodologies, and the software platform choices to best support all of that along with security and compliance expertise. These are all aspects certain consultancies will have expertise on more than others, and not many are going to be strong across all of them. 

Gardner: It certainly sounds like a lot of planning and perhaps reevaluating the ways of the past. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Who, if anyone, is in charge of multi-cloud business optimization?
https://connect-community.org/2019-1-28-who-if-anyone-is-in-charge-of-multi-cloud-business-optimization/
Mon, 28 Jan 2019 18:57:02 +0000

Learn from an IT industry analyst about the forces reshaping the consumption of hybrid cloud services and why the model around procurement must be accompanied by an updated organizational approach.


The next BriefingsDirect composable cloud strategies interview explores how changes in business organization and culture demand a new approach to leadership over such functions as hybrid and multi-cloud procurement and optimization.

We’ll now hear from an IT industry analyst about the forces reshaping the consumption of hybrid cloud services and why the model around procurement must be accompanied by an updated organizational approach — perhaps even a new office or category of officer in the business.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Here to help explore who — or what — should be in charge of spurring effective change in how companies acquire, use, and refine their new breeds of IT is John Abbott, Vice President of Infrastructure and Co-Founder of The 451 Group. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What has changed about the way that IT is being consumed in companies? Is there some gulf between how IT was acquired and the way it is being acquired now?

Abbott: I think there is, and it’s because of the rate of technology change. The whole cloud model has risen over traditional IT and is being adopted in ways that we probably didn’t foresee just 10 years ago. So, CAPEX to OPEX, operational agility, complexity, and costs have all been big factors.



But now, it’s not just cloud, it’s multi-cloud as well. People are beginning to say, “We can’t rely on one cloud if we are responsible citizens and want to keep our IT up and running.” There may be other reasons for going to multi-cloud as well, such as cost and suitability for particular applications. So that’s added further complexity to the cloud model. 

Also, on-premises deployments continue to remain a critical function. You can’t just get rid of your existing infrastructure investments that you have made over many, many years. So, all of that has upended everything. The cloud model is basically simple, but it’s getting more complex to implement as we speak.

Gardner: Not surprisingly, costs have run away from organizations that haven’t been able to stay on top of a complex mixture of IT infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS). So, this is becoming an economic imperative. It seems to me that if you don’t control this, your runaway costs will start to control you.

Abbott: Yes. You need to look at the cloud models of consumption, because that really is the way of the future. Cloud models can significantly reduce cost, but only if you control it. Instance sizes, time slices, time increments, and things like that all have a huge effect on the total cost of cloud services.

Also, if you have multiple people in an organization ordering particular services on their credit cards, that gets out of control as well. So you have to gain control over your spending on cloud. And with services complexity — I think Amazon Web Services (AWS) alone has hundreds of price points — things are really hard to keep track of.
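
To see how much billing granularity alone can matter, compare a short task run many times a day under per-hour versus per-second billing; the rate and workload below are invented numbers, purely for illustration:

```python
import math

rate_per_hour = 0.40   # hypothetical instance price
task_seconds = 90      # a short job
runs_per_day = 1000

# Per-hour increments: each 90-second run is billed as a full hour.
hourly = math.ceil(task_seconds / 3600) * rate_per_hour * runs_per_day

# Per-second billing: pay only for the time actually used.
per_second = (task_seconds / 3600) * rate_per_hour * runs_per_day

print(f"per-hour increments: ${hourly:.2f}/day")      # $400.00/day
print(f"per-second billing:  ${per_second:.2f}/day")  # $10.00/day
```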


Gardner: When we are thinking about who — or what — has the chops to know enough about the technology, understand the economic implications, be in a position to forecast cost, budget appropriately, and work with the powers that be who are in charge of enterprise financial functions — that’s not your typical IT director or administrator.

IT Admin role evolves in cloud 

Abbott: No. The new generation of generalist IT administrators, the people who grew up with virtualization, don’t necessarily look at the specifics of a storage platform, or compute platform, or a networking service. They look at it on a much higher level, and those virtualization admins are the ones I see as probably being the key to all of this.

But they need tools that can help them gain command of this. They need, effectively, a single pane of glass — or at least a single control point — for these multiple services, both on-premises and in the cloud. 

Also, as the data centers become more distributed, going toward the edge, that adds even further complexity. The admins will need new tools to do all of that, even if they don’t need to know the specifics of every platform.
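
As a design sketch, a "single control point" usually means one thin interface that admin tooling codes against, with per-platform adapters behind it. The adapter classes below are hypothetical stand-ins, not real SDK calls; a production tool would back them with the EC2, vCenter, or iLO APIs.

from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class Workload:
    name: str
    location: str   # e.g. "aws:us-east-1", "on-prem:rack-12", "edge:site-7"
    cpu_pct: float  # current utilization

class AwsAdapter:
    def list_workloads(self) -> Iterable[Workload]:
        # Stand-in for EC2/ECS inventory calls.
        return [Workload("billing-api", "aws:us-east-1", 41.0)]

class OnPremAdapter:
    def list_workloads(self) -> Iterable[Workload]:
        # Stand-in for a vCenter or iLO inventory query.
        return [Workload("erp-db", "on-prem:rack-12", 78.5)]

def single_pane(platforms) -> List[Workload]:
    """One aggregated view, regardless of where each workload runs."""
    return [w for p in platforms for w in p.list_workloads()]

for w in single_pane([AwsAdapter(), OnPremAdapter()]):
    print(f"{w.name:12s} {w.location:18s} {w.cpu_pct:5.1f}% CPU")

The point of the pattern is that the admin’s view and the automation logic stay identical as new platforms are added underneath.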

Gardner: I have been interested and intrigued by what Hewlett Packard Enterprise (HPE) has been doing with such products as HPE OneSphere, which, to your point, provides more tools, visibility, automation, and composability around infrastructure, cloud, and multi-cloud.

But then, I wonder, who actually will best exploit these tools? Who is the target consumer, either as an individual or a group, in a large enterprise? Or is this person or group yet to be determined?

Abbott: I think they are evolving. There are skill shortages, obviously, for managing specialist equipment, and organizations can’t replace some of those older admin types. So, they are building up a new level of expertise that is more generalist. It’s those newer people coming up, who are used to the mobile world, who are used to consumer products a bit more, that we will see taking over.

We are going toward everything-as-a-service and cloud consumption models. People have greater expectations on what they can get out of a system as well. 

Also, you want the right, most cost-effective resources applied to your application. That might be a particular cloud service from AWS, Microsoft Azure, or Google Cloud Platform, or it might be a specific in-house platform that you have. No one is likely to have all of that specific knowledge in the future, so it needs to be automated.


We are looking to the developers and the systems architects to pull that together with the help of new automation tools, management consoles, and control planes, such as HPE OneSphere and HPE OneView. That will pull it together so that the admin people don’t need to worry so much. A lot of it will be automated.

Gardner: Are we getting to a point where we will look for an outsourced approach to overall cloud operations, the new IT procurement function? Would a systems integrator, or even a vendor in a neutral position, be able to assert themselves on best making these decisions? What do you think comes next when it comes to companies that can’t quite pull this off by themselves?

People and AI partnership prowess

Abbott: The role of partners is very important. A lot of the vertically oriented systems integrators and value-added resellers, as we used to call them, with specific application expertise are probably the people in the best position.

We saw recently at HPE Discover the announced acquisition of BlueData, which allows you to configure a particular pool of your infrastructure for things like big data and analytics applications. And that’s sort of application-led.

The experts in data analysis and in artificial intelligence (AI), the data scientists coming up, are the people that will drive this. And they need partners with expertise in vertical sectors to help them pull it together.

Gardner: In the past when there has been a skills vacuum, not only have we seen a systems integration or a professional services role step up, we have also seen technology try to rise to the occasion and solve complexity. 

Where do you think the concept of AIOps, or using AI and machine learning (ML) to help better identify IT inefficiencies, will fit in? Will it help make predictions or recommendations as to how you run your IT?


Abbott: There is huge potential there. I don’t think we have actually seen that play out yet. But IT organizations are in a great position to gather a huge amount of data from sensors, usage data, logs, and everything like that, pull it together, see what the patterns are, and recommend and optimize from that in the future.



I have seen some startups doing system tuning, for example. Experts who optimize the performance of a server usually have a particular area of expertise, and they can’t really go beyond it because it’s huge in itself. There are around 100 “knobs” on a server that you can tweak to improve performance. I think you can only do that in an automated fashion now. And we have seen some startups use AI modeling, for instance, to pull those things together. That will certainly be very important in the future.
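
A toy version of that automated tuning looks like the sketch below: random search over a large knob space against a measured objective. The knob names and the benchmark function are invented for illustration; a real tuner would run an actual workload, and could swap Bayesian optimization in for the random search.

import random

KNOBS = {
    "io_scheduler": ["noop", "deadline", "cfq"],
    "tcp_rmem_max": [262144, 4194304, 16777216],
    "vm_swappiness": [0, 10, 30, 60],
    "hugepages": ["off", "on"],
}

def run_benchmark(config):
    """Stand-in for running a real workload; returns requests/sec."""
    score = 1000.0
    score += 200 if config["io_scheduler"] == "deadline" else 0
    score += config["tcp_rmem_max"] / 100000
    score -= config["vm_swappiness"] * 2
    return score + random.gauss(0, 25)  # measurement noise

def tune(trials=50):
    """Try random knob combinations; keep the best-scoring config."""
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {knob: random.choice(vals) for knob, vals in KNOBS.items()}
        score = run_benchmark(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

cfg, score = tune()
print(f"best config {cfg} -> {score:.0f} req/s")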

Gardner: It seems to me a case of the cobbler’s children having no shoes. The IT department doesn’t seem to be on the forefront of using big data to solve their problems.

Abbott: I know. It’s really surprising because they are the people best able to do that. But we are seeing some AI coming together. Again, at the recent HPE Discover conference, HPE InfoSight made news as a tool that’s starting to do that analysis more. It came from the Nimble acquisition and began as a storage-specific product. Now it’s broadening out, and it seems they are going to be using it quite a lot in the future.

Gardner: Perhaps we have been looking for a new officer or office of leadership to solve multi-cloud IT complexity, but maybe it’s going to be a case of the machines running the machines.

Faith in future automation 

Abbott: A lot of automation will be happening in the future, but that takes trust. We have seen AI waves [of interest] over the years, of course, but the new wave of AI still has a trust issue. It takes a bit of faith for users to hand over control.

But as we have talked about, with multi-cloud, the edge, and things like microservices and containers — where you split up applications into smaller parts — all of that adds to the complexity and requires a higher level of automation that we haven’t really quite got to yet but are going toward.

Gardner: What recommendations can we conjure for enterprises today to start them on the right path? I’m thinking about the economics of IT consumption, perhaps getting more of a level playing field, or a common denominator, in terms of how one acquires an operating basis using different finance models. We have heard about such plans from HPE, HPE GreenLake Flex Capacity, for example.


What steps would you recommend that organizations take to at least get them on the path toward finding a better way to procure, run, and optimize their IT?

Abbott: I actually recently wrote a research paper for HPE on the eight essentials of edge-to-cloud and hybrid IT management. The first thing we recommended was a proactive cloud strategy. Think through your cloud strategy: where to put your workloads and how to distribute them across different clouds, if that’s what you think is necessary.

Then modernize your existing technology. Try to use automation tools on that traditional stuff, and simplify it with hyperconverged and/or composable infrastructure so that you have more flexibility in your resources.

Make the internal stuff more like a cloud. Take out some of that complexity. It has to be quick to implement. You can’t spend six months doing this, or something like that.


Some of these tools we are seeing, like HPE OneView and HPE OneSphere, are a better bet than some of the traditional, huge management frameworks that we used to struggle with.

Make sure it’s future-proof. You have to be able to use operating system and virtualization advances [like containers] that we are used to now, as well as public cloud and open APIs. This helps accelerate things that are coming into the systems infrastructure space.

Then strive for everything-as-a-service, so use cloud consumption models. You want analytics, as we said earlier, to help understand what’s going on and where you can best distribute workloads — from the cloud to the edge or on-premises, because it’s a hybrid world and that’s what we really need.

And then make sure you can control your spending and utilization of those services, because otherwise they will get out of control and you won’t save any money at all. Lastly, be ready to extend your control beyond the data center to the edge as things get more distributed. A lot of the computing will increasingly happen close to the edge.
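
The "analytics to decide where workloads run" step can be pictured as a scoring function over candidate venues. The venue profiles and weights below are assumptions for the sketch, not a real costing model; a production placer would feed in measured costs and latencies.

VENUES = {
    "public-cloud": {"cost_per_hr": 0.90, "latency_ms": 45, "data_gravity": 0.2},
    "on-prem":      {"cost_per_hr": 0.60, "latency_ms": 5,  "data_gravity": 0.9},
    "edge-site":    {"cost_per_hr": 1.20, "latency_ms": 1,  "data_gravity": 0.4},
}

def place(latency_weight, cost_weight, gravity_weight):
    """Return the venue with the best weighted score for a workload."""
    def score(p):
        return (gravity_weight * p["data_gravity"]
                - cost_weight * p["cost_per_hr"]
                - latency_weight * p["latency_ms"] / 10)
    return max(VENUES, key=lambda v: score(VENUES[v]))

print(place(latency_weight=5.0, cost_weight=1.0, gravity_weight=1.0))  # edge-site
print(place(latency_weight=0.1, cost_weight=5.0, gravity_weight=1.0))  # on-prem

With these illustrative numbers, a latency-sensitive workload lands at the edge, while a cost-sensitive batch job lands on the cheapest capacity.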

Gardner: Micro data centers at the edge?

Computing close to the edge

Abbott: Yes. That has to be something you start working on now. If you have software-defined infrastructure, that’s going to be easier to distribute than if you are still wedded to particular systems, as in the old, traditional model.

Gardner: We have talked about what companies should do. What about what they shouldn’t do? Do you just turn off the spigot and say no more cloud services until you get control?

It seems to me that that would stifle innovation, and developers would be particularly angry or put off by that. Is there a way of finding a balance between creative innovation that uses cloud services, but within the confines of an economic and governance model that provides oversight, cost controls, and security and risk controls?

Abbott: The best way is to use some of these new tools as bridging tools. So, with hybrid management tools, you can keep your existing mission-critical applications running and make sure that they aren’t disrupted. Then, gradually you can move over the bits that make sense onto the newer models of cloud and distributed edge.


You don’t do it in one big bang. You don’t lift-and-shift everything from one to the other, or retreat back from the cloud, as some people have, when it has not worked out. It’s about keeping both worlds going in a controlled way. You must make sure you measure what you are doing and know what the consequences are, so it doesn’t get out of control.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

]]>
https://connect-community.org/2019-1-28-who-if-anyone-is-in-charge-of-multi-cloud-business-optimization/feed/ 0
A discussion with IT analyst Martin Hingley on the culmination of 30 years of IT management maturity https://connect-community.org/2019-1-23-a-discussion-with-it-analyst-martin-hingley-on-the-culmination-of-30-years-of-it-management-maturity/ https://connect-community.org/2019-1-23-a-discussion-with-it-analyst-martin-hingley-on-the-culmination-of-30-years-of-it-management-maturity/#respond Wed, 23 Jan 2019 21:31:03 +0000 https://connect-community.org//2019-1-23-a-discussion-with-it-analyst-martin-hingley-on-the-culmination-of-30-years-of-it-management-maturity/ A discussion on how new maturity in management over all facets of IT amounts to a culmination of 30 years of IT operations improvement and ushers in an era of comprehensive automation, orchestration, and AIOps.

]]>

The next BriefingsDirect hybrid IT strategies interview explores how new maturity in the management and composition of multiple facets of IT, from cloud to bare metal, from serverless to legacy systems, amounts to a culmination of 30 years of IT evolution.

We’ll hear now from an IT industry analyst about why, for perhaps the first time, we’re able to gain an uber-view over all of IT operations. And we’ll explore how increased automation over complexity, such as hybrid and multi-cloud deployments, sets the stage for artificial intelligence (AI) in IT operations, or AIOps.

It may mean finally mastering IT heterogeneity and giving businesses the means to truly manage how they govern and sustain all of their digital business assets.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Here to help us define the new state of total IT management is Martin Hingley, President and Market Analyst at ITCandor Limited, based in Oxford, UK. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Looking back at IT operations, it seems that we have added a lot of disparate and hard-to-manage systems – separately and in combination — over the past 30 years. Now, with infrastructure delivered as services and via hybrid deployment models, we might need to actually conquer the IT heterogeneity complexity beast – or at least master it, if not completely slay it.

Do you agree that we’re entering a new era in the evolution of IT operations and approaching the need to solve management comprehensively, over all of IT?



Hingley: I have been an IT industry analyst for 35 years, and it’s always been the same. Each generation of systems comes in and takes over from the last, which has always left operators with the problem of trying to manage the new with the old.

A big shift was the client/server model in the late 1980s and early 1990s, with the influx of PC servers and the wonderful joy of having all these new systems. The problem was that you couldn’t manage them under the same regime. And we have seen a continuous development of that problem over time.

It’s also a different problem depending on the size of the organization. Small- to medium-sized businesses (SMBs) can at least get by with bundled systems that work fine and use Microsoft operating systems. But the larger organizations generate a huge mixture of resources.

Cloud hasn’t helped. Cloud is very different from your internal IT stuff — the way you program it, the way you develop applications. It has a wonderful cost proposition; at least initially. It has a scalability proposition. But now, of course, these companies have to deal with all of this [heterogeneity].

Now, it would be wonderful if we get to a place where we can look at all of these resources. A starting point is to think about things as a service catalog at the center of your corporate apps. And people are beginning to adopt that as an approach, even if it doesn’t yet sit in everybody’s brain.

So, you start to be able to compose all of this stuff. I like what Hewlett Packard Enterprise (HPE) is doing [with composable infrastructure]. … We are now getting to the point where you can do it, if you are clever. Some people will, but it’s a difficult, complex subject.
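
As a minimal sketch of that service-catalog starting point: a typed registry of every service the business consumes, wherever it runs, which then becomes the composition point for queries and policy. The fields and entries here are illustrative assumptions, not a standard schema.

from dataclasses import dataclass

@dataclass(frozen=True)
class CatalogEntry:
    service: str
    provider: str       # "aws", "azure", "on-prem", ...
    owner: str          # accountable team
    monthly_cost: float
    tier: str           # "mission-critical", "standard", "experimental"

CATALOG = [
    CatalogEntry("payroll", "on-prem", "finance-it", 9500.0, "mission-critical"),
    CatalogEntry("customer-analytics", "aws", "data-eng", 4200.0, "standard"),
    CatalogEntry("chatbot-pilot", "azure", "innovation", 800.0, "experimental"),
]

# One query answers a question that would otherwise need a tour of
# every platform team: which mission-critical services still run on-prem?
on_prem_critical = [e.service for e in CATALOG
                    if e.provider == "on-prem" and e.tier == "mission-critical"]
print(on_prem_critical)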

Gardner: The idea of everything-as-a-service gives you the opportunity to bring in new tools. Because organizations are trying to transform themselves digitally — and the cloud has forced them to think about operations and development in tandem — they must identify the most efficient mix of cloud and on-premises deployments.

They also have to adjust to a lack of skills by automating and trying to boil out the complexity. So, as you say, it’s difficult.

But if 25 percent of companies master this, doesn’t that put them in a position of being dominant? Don’t they gain an advantage over the people who don’t?

Hingley: Yes, but my warning from history is this. With mainframes, we thought we had it all sorted out. We didn’t. We soon had client/server, and then mini-computers with those UNIX systems, all with their own virtualization and all that wonderful stuff. You could isolate one application’s data in a partition from a different application’s data. We had all of that, and then along came the x86 server.


It’s an architectural issue rather than a technology issue. Now we have cloud, which is very different from the on-premises stuff. My warning is let’s not try and lock things down with technology. Let’s think about it as architecture. If we can do that, maybe we can accommodate neuromorphic and photonic and quantum computing within this regime in the future. Remember, the people who really thought they had it worked out in previous generations found out that they really hadn’t. Things moved on.

Gardner: And these technology and architectural transitions have occurred more frequently and accelerated in impact, right?

Beyond the cloud, IT is life

Hingley: I have been thinking about this quite a lot. It’s a weird thing to say, but I don’t think “cloud” is a good name anymore. I mean, if you are a software company, you’d be an idiot if you didn’t make your products available as a service.

Every company in the world uses the cloud at some level. Basically, there is no longer a choice about whether we use the cloud. All those companies that thought they didn’t, when people actually looked, found they were using the cloud a lot in different departments across the organization. So it’s a challenge, yet things constantly change.

If you look 20 years in the future, every single physical device we use will have some level of compute built into it. I don’t think people like you and I are going to be paid lots of money for talking about IT as if it were a separate issue. 

It is the world economy, it just is. So it becomes about how well you manage everything together.


As this evolves, there will be genuinely new things … to manage this. It is possible to manage your resources in a coherent way, to sit over the top of heterogeneous resources and manage them.

Gardner: A tandem trend to composability is that more and more data becomes available: at the edge, in smart homes and smart cities, and also in smarter data centers. So, we’re talking about data from every device in the data center, through the network, to the end devices, and back again. We can even determine better and better how users consume the services.

We have a plethora of IT ops data that we’re only starting to mine to improve how IT manages itself. And as we gain a better trail of all of that data, we can apply machine learning (ML) capabilities to see the trends, optimize, and become more intelligent about automation. Perhaps we let the machines run the machines. At least that’s the vision.

Do you think that this data capability has pushed us to a new point of manageability? 

Data’s exploding, now what? 

Hingley: A jetliner flying across the Atlantic creates 5TB of data; each one. And how many fly across the Atlantic every day? Basically you need techniques to pick out the valuable bits of data, and you can’t do it with people. You have to use AI and ML.

The other side is, of course, that data can be dangerous. We see the European Union (EU) passing the General Data Protection Regulation (GDPR), saying it’s a citizens’ right within the EU to have their privacy, and the data associated with them, protected. So, we have all sorts of interesting things going on.

The data is exploding. People aren’t filtering it properly. And then we have things like autonomous cars coming, which are going to create massive amounts of data. Think about the security implications of somebody hacking into your system while you are doing 70 miles an hour on a motorway.

I always use the parable of the seeds. Remember that some seeds fall on fallow ground, some fall in the middle of the field. For me, data is like that. You need to work out which bits of it you need to use, you need to filter it in order to get some reasonable stuff out of it, and then you need to make sure that whatever you are doing is legal. I mean, it’s got to be fun.
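
One way to picture that filtering is the minimal sketch below: keep only the readings that fall far outside a running baseline, so the bulk of the telemetry never has to leave the aircraft or the edge site. The window size and the four-sigma threshold are illustrative assumptions, as is the injected spike.

import random
from collections import deque
from statistics import mean, stdev

def interesting(readings, window=50, threshold=4.0):
    """Yield (index, value) for readings far outside the recent baseline."""
    recent = deque(maxlen=window)
    for i, x in enumerate(readings):
        if len(recent) >= 10 and stdev(recent) > 0:
            if abs(x - mean(recent)) / stdev(recent) > threshold:
                yield i, x
        recent.append(x)

stream = [random.gauss(100, 2) for _ in range(10_000)]
stream[7321] = 160.0  # an injected sensor spike
print(list(interesting(stream)))  # roughly [(7321, 160.0)]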


Gardner: If businesses are tasked with this massive and growing data management problem, it seems to me they ought to get their IT house in order. That means across a vast heterogeneity of systems, deployments, and data types, in order to master the data equation for your line-of-business applications and services.

How important is it, then, for AIOps (applying AI principles to the operations of your data centers) to emerge sooner rather than later?

You can handle the truth 

Hingley: You have to do it. If you look at GDPR or Sarbanes-Oxley before that, the challenge is that you need a single version of the truth. Lots of IT organizations don’t have a single version of the truth.

If they were subpoenaed to supply every email that has the words “Monte Carlo” in it, they couldn’t do it. There are probably 25 copies of all the emails, and no way of organizing them. So data governance is hugely important; it’s not nice to have, it’s essential to have. And new regulations are coming; it’s not just the EU, as GDPR-style rules are being adopted in lots of countries.

It’s essential to get your own house in order. And there’s so much data in your organization that you are going to have to use AI and ML to be able to manage it. And it has to go into IT ops. I don’t think it’s a choice; I don’t think many people are there yet, but it’s nonetheless a must-do.
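
As a sketch of why that “Monte Carlo” subpoena is hard without governance, the example below collapses byte-identical message copies by content hash and then answers a keyword query once per unique message. The in-memory corpus is a stand-in for a real archive; production e-discovery adds normalization, near-duplicate detection, and indexing on top of this idea.

import hashlib

def dedupe(messages):
    """Collapse byte-identical copies; returns {sha256_digest: text}."""
    unique = {}
    for text in messages:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        unique.setdefault(digest, text)
    return unique

def search(unique, term):
    return [t for t in unique.values() if term.lower() in t.lower()]

corpus = [
    "Q3 risk run: Monte Carlo simulation finished overnight.",
    "Q3 risk run: Monte Carlo simulation finished overnight.",  # a forwarded copy
    "Lunch on Friday?",
]
unique = dedupe(corpus)
print(len(corpus), "messages,", len(unique), "unique")
print(search(unique, "monte carlo"))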

Gardner: We’ve heard recently from HPE about the concept of a Composable Cloud, and that includes elevating software-defined networking (SDN) to a manageability benefit. This helps create a common approach to the deployment of cloud, multi-cloud, and hybrid-cloud.


Is this the right direction to go? Should companies be thinking about a common denominator to help sort through the complexity and build a single, comprehensive approach to management of this vast heterogeneity?

Hingley: I like what HPE is doing, in particular the mixing of the different resources. You also have the HPE GreenLake model underneath, so you can pay for only what you use. By the way, I have been an analyst for 35 years; if the industry had actually shifted every time it started talking about the need to move from CAPEX to OPEX, we would be at 200 percent OPEX by now.

In the bad times, we move toward OPEX. In the good times, we secretly creep back toward CAPEX because it has financial advantages. You have to be able to mix all of these together, as HPE is doing.

Moreover, in terms of the architecture, the network fabric approach, the software-defined approach, and the API connections are essential to move forward. You have to get beyond point products. I hope that HPE, and maybe a couple of other vendors, will propose something that’s very useful and that helps people sort this new world out.


Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

 

]]>
https://connect-community.org/2019-1-23-a-discussion-with-it-analyst-martin-hingley-on-the-culmination-of-30-years-of-it-management-maturity/feed/ 0