In this week’s episode, we have Brian Pawlowski, the Chief Development Officer at Quantum. Quantum helps organizations harness the potential of their expanding unstructured data, offering an affordable solution for storing data for decades to come.
During our conversation, we delved into the shifting data environment, driven by the exponential growth of unstructured data and the transformative capabilities of AI, empowering organizations to derive meaningful insights from this data abundance. Furthermore, we investigated the optimal combination of cloud and on-premises storage solutions, taking into account specific use case needs, budget constraints, and security considerations.
Key Discussion Points:
What fundamental changes in data usage and characteristics distinguish unstructured data from the past?
Are the use cases of long-term data archiving, enterprise backup, surveillance and security all served by the same architecture?
How do government regulations on data storage and handling of PII impact Quantum in countries like China and India?
You can find him on:
Website: https://www.quantum.com/
LinkedIn: https://www.linkedin.com/in/brianpawlowski
Transcript.
Erik: Brian, thanks so much for joining us on the podcast today.
Brian: Hey. Thanks, Erik. It's great to be here.
Erik: Yeah, so this, I can tell you up front, is going to be a challenging one for me. I think what you guys are doing is very on the IT side of IoT, where I'm a little bit more comfortable on the application side. So I'm looking forward to you educating me and then the rest of the audience today.
Brian: Yeah, this should be a good conversation.
Erik: Let's start with a bit of background on you. You're the Chief Development Officer of Quantum. You've had some interesting roles before: you were CTO of DriveScale, VP and Chief Architect at Pure Storage, and SVP at NetApp. So you have a very deep technical background. What was it that led you through these roles to where you sit today at Quantum? What was it about the problem or the company that attracted you?
Brian: I would say two things attracted me to Quantum. One, the CEO, Jamie Lerner. I met him during COVID at one of the few restaurants that were open in Santa Clara at the time. It was a great Italian restaurant. He has good taste in restaurants. That was a point in his favor. But as we sat at dinner talking about Quantum and his vision for what he wanted to do with the company and the product portfolio — pivoting around unstructured data and an end-to-end solution for enabling people to derive business value from their data, from the edge to the cloud, or edge to the archive, depending on how you slice the problem — I found that vision compelling. I really did. I've spent the past nearly three years basically helping Jamie implement that vision within Quantum.
I think the thing that struck me as Jamie was talking to me was the second part. If you look at the companies I worked for before, it doesn't take much squinting to note that they are single-product companies. NetApp has what they started calling a 'storage appliance' back in the early '90s; it's now what they call Data ONTAP, and it's the predominant portion of their revenues. Similarly, Pure Storage has its Purity OS and the FlashArray and FlashBlade systems, which are essentially different implementations of the same concept for databases and unstructured data, respectively. DriveScale was a startup with one product. So I had spent so much time working on one product at single-product companies. But Quantum is a continual challenge for me. It's like keeping six plates spinning while I'm loading up the seventh. And I never did acrobatics like that, or juggling.
Erik: Not only multi-product but really multi-tech stack, right?
Brian: Yes.
Erik: You have a lot of specialized hardware, software. So it's really actually a quite complex portfolio.
Brian: Yeah, our specialized hardware is really focused on what we call our automation, or tape, business. That is definitely built by us, custom, with robots and real hardware. Not just fake hardware with CPUs and stuff like that. That's fake. Real hardware involves robots zooming back and forth between the racks, picking tapes, and loading them into drives. That's hardware. That is definitely custom.
But I think for the rest of our portfolio, we actually make a concerted effort to use, as much as we can, high-volume commodity. I call them commodity. Dell would call them high volume of something else. They don't like the word commodity hardware. But we try to avoid anything boutique in the hardware and focus on the software value-add. That's where our differentiation is.
Erik: Got you. Well, let's take a step back before we get too much into your portfolio and talk about this topic of unstructured data. Companies have been managing data-related challenges for a long time, so there's the question of what's different now that requires a new set of solutions, a new architecture. I think the key here is that word unstructured — the data companies are dealing with, the volumes, and maybe how they're using it.
Can you walk us through a little bit of your thought process? What is different now in terms of the type of data people are using, and how they're using it? And why does that require a new set of solutions or a different solution than maybe the historical data management tools?
Brian: Do keep an eye out for that diagram I sent you, the picture, because it would help this bit of the conversation. So a short history lesson. I spent the early part of my career in storage. I've been in storage for a very long time.
A long time ago, a lot of the spend in IT revolved around storage — basically the care and feeding of database systems for essentially all of the business aspects of a company. Anything else — a long, long time ago, email and stuff like that — was just so far off the radar screen for IT in terms of investment and management that it didn't factor into their thinking. It was all about the business applications. A lot of that revolved around databases, which is where the term structured data came from.
When people say structured data versus unstructured data, the simplest way to look at it is: structured data is database data, where it has a specific format and a small set of specific applications that access it. The data footprint is rather small, actually. Unstructured data is literally everything else. That would include — moving forward a little bit — email, the web, web pages, music. I'm actually looking at my diagram here, my cartoon. Music, video, which is all digital now. That's really important; I'll get back to that. Scientific data, be it telemetry data from satellites or space missions, of which we keep sending more up there while figuring out what rockets to put them on, or geophysical data for oil and gas exploration. And very much so in the bio space — biopharma — genetics discovery, drug discovery, and genome mapping. I just swept through history to where we are today.
The thing about all the things I just named is that not one of them is going into a database. They are text files. They're uncompressed or compressed video files that you stream when you want to catch the latest movie. They're the MP3s or audio files that you're listening to on Spotify. Then things like self-driving cars — I have two Teslas; we can go back to that later if you want. The amount of data those cars are collecting and using to develop self-driving capabilities is enormous. They all have different formats. They are not going into databases. They're often text files, or variations on JSON files, or binary files in formats specific to the codecs for video or audio.
The funny thing here is that in 1990, everybody was worried about how to keep their database up and running and how to get performance out of it. Today, I would almost say nobody cares. Well, they do care. But the majority of the spend on IT right now is essentially managing the deluge of unstructured data that drives their business every day, coming from so many sources. It's a fantastic complexity. Does that make sense?
Erik: It makes sense. Absolutely. There's maybe another angle that I tend to grapple with more in my day job. Because I'm not usually on the back end helping companies figure out how to manage that data effectively. Where I often am is helping companies figure out the value of that data. It feels, at least from my perspective, that with unstructured data there's a lot more uncertainty around the actual value of the data than with structured data.
Usually, structured data is procured for a purpose. It fits into a database. Maybe you still discover new uses for it, but it seems that, often, it's fairly transparent what the data is being used for. With unstructured data, I have a lot of clients who are basically saying: we've got all this data, we think there's a tremendous amount of value in these different areas, and we're not quite sure what the actual value is. We feel we could be 5x more efficient in R&D based on this data. We're not really using it right now. We're not quite sure how to use it. But we know there's a lot of know-how locked in it. So on the business model or solution side, there's also a lot more uncertainty — and maybe a lot of hidden value — in those datasets.
Brian: Yeah, absolutely. I mean, the elephant I haven't pointed out, that's all around us while we're having this conversation, is AI and machine learning — generative AI and everything that's going on today. All this unstructured data is coming at us from many different sources.
In my house, I would call myself almost a Luddite personally in terms of the smart home kind of thing. And yet, I have my light switches communicating with a little hub, and my mesh Wi-Fi all around the place that's essentially using Wi-Fi signal disruption to determine if there's any motion. So it's acting as a security system within my house — the Plume pods. It uses all of the Wi-Fi connected devices and signal disruptions to detect motion within the house.
Erik: Sorry. I've got to interrupt you for a minute. I just had the podcast yesterday with a company called Origin AI. I don't know if it's the technology behind that, but that's exactly what they do. It's interesting that you brought that up. It's a fascinating technology.
Brian: And you know what this is? It's basically taking lots of data and applying algorithms and transformations to it, and some AI-based intelligence, to extract patterns and information. When you step back and look at it, it's so freaking cool. It's just that — go back to 1990. This is the thing. Nobody was thinking about this. If there was somebody thinking about this, I would like to see what stocks they bought in the past 30 years. Because it's simultaneously exciting and increasingly challenging, not getting simpler.
By the way, about the AI and machine learning elephant in the room: what people discovered was that a lot of this data they were collecting and didn't know what to do with — except maybe delete — could be fed to machine learning algorithms to develop business analytics that get higher fidelity the more data you feed them. And so it's created a vicious cycle, or a virtuous cycle if you want to take the positive side of it. The more data you have and feed into your AI and machine learning algorithms, the more accurate and useful they become. That gives you ideas for more types of applications that are essentially learning from your day-to-day data. Which, by the way, has led everybody to the same approach I have with my garage: don't throw out anything. Keep it. Because you never know when you might need it. So just keep everything forever.
Erik: That's right. That's the IT vendor's dream. But to some extent, it is the thought process now. I guess maybe your valuation doubled after OpenAI launched. Because now companies are looking at all of this unstructured data that's been sitting around — largely useless in different files, or maybe used occasionally by somebody every four years. All of a sudden, we have tools that promise to tap into that. Of course, then we have to bring that on-premise, right? Because companies, at least with their own datasets, are not going to upload everything even to a trusted partner like Microsoft. They're going to want to do a lot of this work on-prem.
Let's get in then a little bit to how you help companies solve the problem. I guess there are a lot of different layers in the data management problem. Where do you fit into the architecture?
Brian: I'll use two examples to get us into the breadth of the Quantum products and how we're trying to manage that. One thing we have is a product called ActiveScale. It's an S3-compatible object storage system. S3 is the object storage API that Amazon uses, and objects are the bulk of data being stored on Amazon, accessed via the S3 APIs. I forget what S3 stands for — Simple Storage Service, something like that. Anyway, it's the object storage standard.
There are other object storage systems out there, but everything has settled on S3 as the data exchange format, with what they call put and get operations on objects. About a year and a half ago, we introduced a version of the product called ActiveScale Cold Storage, which essentially merges in our large tape robot libraries. We're talking thousands of tapes, with multiple tens of tape drives, and robots that load and unload the drives with cartridges and basically maintain a catalogue of what all the data is and where it is.
We put that behind our ActiveScale system and basically provide — not to be too crude — a cheap and deep S3-based object archive that allows you to store data for a long time and fetch it using the industry-standard S3 object interface. So you don't ever see tapes. You don't ever think about tapes. You don't think about anything other than an object storage interface, the same way you would write an application for Amazon storage access. That's one thing we have. The important part of that is cheap and deep.
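To make that concrete: a minimal sketch of what the application side looks like against any S3-compatible store, written in Python with boto3. The endpoint URL, bucket name, keys, and credentials below are placeholders for illustration, not Quantum's actual configuration.

```python
import boto3

# Point the standard AWS SDK at an S3-compatible endpoint instead of AWS.
# Endpoint, bucket, key names, and credentials are hypothetical.
s3 = boto3.client(
    "s3",
    endpoint_url="https://activescale.example.internal",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Put an object into the archive...
s3.put_object(Bucket="archive", Key="telemetry/2023/run-042.json", Body=b"...")

# ...and get it back later. The application never knows whether the bytes
# ultimately live on disk, flash, or tape behind the interface.
obj = s3.get_object(Bucket="archive", Key="telemetry/2023/run-042.json")
data = obj["Body"].read()
```

That interface uniformity is the point: the same two calls work whether the back end is Amazon, an on-prem disk system, or a tape library.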
Erik: For that one in particular, can you help us understand two aspects? What does the cost structure look like for that solution compared to a traditional cloud solution? And what is the latency for accessing data, given the physical component of robots fetching tapes?
Brian: I'm going to wave my hands a bit here, and then maybe I can follow up. When I think of tape, I think one tenth the cost of disk-based storage. That comes primarily from the media cost: the dollars per terabyte for tape media versus spinning disk are lower. But also, the total cost in terms of density of footprint, real estate in the data center, and power consumption is a fraction of keeping disks spinning to get access to the data. Because the tape is sitting in its little cubbyhole — for all intents and purposes, it's sitting on a shelf drawing no power whatsoever. That is absolutely key to slamming down the cost.
The other thing about tape versus disk is that tape has been shown to be a reliable archive method for storing data for 5 to 10 or more years under normal storage conditions. With disks, you're looking at 5-year warranties, 7 at maximum, and a certain fraction of the population drops out over time. Tape has been a long-term storage technology for things like backups forever. It's a well-understood, well-developed, well-managed technology.
What we've done is bring it into the 21st century by putting S3 in front of it. This is essentially what Amazon did with their Glacier storage tier, which is basically their way of storing archive data. In terms of access times, you're looking at minutes for accessing a large file. Sometimes you can put together policies to make it shorter, like a minute or sub-minute. But the trade-off is: you can store lots of data, and you have an access delay. It's what I would call the new nearline storage.
People used to do this with cheap, slower disks stored off your primary systems. You'd be accessing data that's not as online as your primary storage, which is now dominated by solid-state flash. It's slow spinning disks, and lots of them, and it takes a while to access that data. Tape is replacing all of those nearline applications with an even slower and cheaper solution. So that's the trade-off: where you put the data to save money.
Erik: Yeah, it makes sense. So if you're doing something like machine learning, you need to access this data for training. Then having to wait a minute is no problem. You're not going to be running real-time applications on it. But then anyway, you usually don't need huge amounts of data for that.
If I look at your use cases, you have four that are highlighted here. Two of them — long-term data archiving and enterprise backup — seem like very straightforward applications of the technology you've just explained. Then there's surveillance and security, and ransomware recovery. Are they using the same architecture, or are there different architectures for those solutions? What are the others? Because you do have a diverse portfolio.
Brian: I would say we have three products in this space — backup, long-term archiving, and ransomware protection and data recovery. Obviously, we have the traditional tape libraries. Those we sell into traditional backup applications. We partner with the major backup software vendors, who essentially front-end our tape: they provide data storage and cataloging off of your primary storage systems, put it onto tape, and manage the tape robots transparently for you. You basically interact with our tape product using a backup application.
Typically, when people are doing a backup application deployment, they're looking at a single backup solution across the multiple storage deployments they have within their data center. That is the common coin: a single interface to all of the backup data. Tape applies to traditional backup deployments.
We also have the DXi product line. We sometimes call it a deduplicating appliance. I dislike that term. What it is, is a highly efficient online disk-based or flash-based backup appliance. It can integrate with tape and use tape as a place to offload older data that you don't think you're going to access again. This is all policy-based. But that backup appliance looks like a tape system — it's a virtual tape library. Let's just call it that for a second.
The thing about it is that it can crack open the data in backup streams and understand it. It pokes around the data and essentially deduplicates and compresses it. The thing about backups — think about your laptop. You back it up today, and your backup software asks: what files changed since yesterday? And if you do a full backup, you say, "Well, I want a complete image of my system anyway on a weekly basis, because I just want a recovery point that gives me a fast restore of the whole system. I don't want to go through incrementals." Anyway, those backup applications produce lots of duplicate data, because a lot of data doesn't change. I mean, this is the whole hot-cold data thing, right? The hot data is the stuff you've been working on in the past week, and everything else just ages in terms of access. It becomes cold. People will quote numbers like 80% to 95% of your data being cold, and 5% to 20% being hot.
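A toy sketch of the deduplication idea in Python, assuming naive fixed-size chunking; production appliances like DXi use variable-size, content-defined chunking with compression on top, but the principle is the same: store each unique chunk once and keep a per-backup recipe.

```python
import hashlib

CHUNK_SIZE = 4096  # toy fixed-size chunks for illustration

store = {}  # fingerprint -> chunk bytes: each unique chunk stored once

def write_backup(stream: bytes) -> list:
    """Split a backup stream into chunks and store only the new ones.
    Returns the 'recipe' of fingerprints needed to rebuild the stream."""
    recipe = []
    for i in range(0, len(stream), CHUNK_SIZE):
        chunk = stream[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:      # duplicate chunks cost nothing extra
            store[fp] = chunk
        recipe.append(fp)
    return recipe

def restore(recipe: list) -> bytes:
    """Rebuild the original stream from its recipe."""
    return b"".join(store[fp] for fp in recipe)

# Two "full backups" that share most of their data: the second one
# adds only a single new chunk to the store.
day1 = b"A" * 8192 + b"B" * 4096
day2 = b"A" * 8192 + b"C" * 4096
r1, r2 = write_backup(day1), write_backup(day2)
assert restore(r1) == day1 and restore(r2) == day2
print(f"{len(store)} unique chunks for 24 KB of logical backups")  # -> 3
```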
One AI machine learning customer we've been talking to is changing their storage architecture. They want to move to an on-prem S3-based object storage architecture for all of their data access. They figure that about 30% of their data is hot and 70% is cold, and they want to store the cold data on tape. Basically, we're working through with them how to automate this — detect the data that's not being accessed and automatically tier it off to lower-cost storage, while keeping the data they need for their AI application near at hand on a disk-based system. Actually, what they're interested in is flash. They're ejecting all spinning-disk systems from their data center and focusing on flash and tape.
Erik: Got you. Okay. So the magic is really in the software, in terms of doing this analysis of which data to deduplicate and where to place it.
Brian: Yeah. So there's the traditional tape system. There's the DXi virtual tape library appliance, which has extraordinary storage efficiency and data reduction capabilities and works well in backup scenarios. Then there's the ActiveScale object storage system I mentioned, especially with the tape back end, which provides an object storage-based solution for your archiving or lower tier of storage. And we're getting close to shipping version 1.0 of our Myriad all-flash array. That's designed as a primary storage solution for applications like AI, video effects rendering, and data analytics — essentially things related to AI. That has to be fast, because these applications need to be fed: you're feeding GPUs at high speed to do processing on data. GPUs — oddly, still called graphics processing units. I think most people don't care about the G anymore.
Then there's ActiveScale object storage on disk- and flash-based systems, used as a middle tier with medium access speeds. And there's ActiveScale object storage with a tape back end, or traditional tape systems, as the lowest-cost tier for long-term storage of data that doesn't need to be immediately online. That's how people are looking at it.
The thing we are cranking our innovation beans on is that we have the ability to specify policies to automatically move data between the tiers. Everybody is saving all their data. You can look up the charts: IDC shows unstructured data on an exponential growth curve for the foreseeable future. The amount of data people are storing is impossible to manage and move manually. You couldn't hire enough people to actually manage it. It has to be automated.
Those automation techniques move data between the tiers and keep the really important data close to the processing, in the more expensive primary storage. I think of flash as 10 times more expensive than disk, which is 10 times more expensive than tape. I just use those numbers to ground myself. You can't do it manually, so you have to have these policies based on last access time of files, age of files, type of files. It's a lot of metadata analysis on the files to do intelligent auto-tiering of data. That's why people are interested in talking to us: we have multiple entry points into their data workflow, and we can automate the management and movement of that data to match the way their business runs and how they use their data.
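A minimal sketch of what such a policy might look like, in Python over a plain file tree. The thresholds, tier names, and the /data path are made up for illustration; a real system would act on object metadata through a policy engine rather than walking a filesystem.

```python
import os
import time

DAY = 86400
# Illustrative policy: newest data on flash, mid-aged on disk, cold on tape.
TIERS = [
    (30 * DAY, "flash"),   # accessed in the last 30 days -> primary
    (180 * DAY, "disk"),   # accessed in the last 6 months -> middle tier
    (None, "tape"),        # everything colder -> archive
]

def pick_tier(path: str, now: float) -> str:
    """Choose a tier from file metadata (here: last access time only)."""
    age = now - os.stat(path).st_atime
    for limit, tier in TIERS:
        if limit is None or age < limit:
            return tier
    return "tape"

now = time.time()
for root, _dirs, files in os.walk("/data"):   # hypothetical data tree
    for name in files:
        path = os.path.join(root, name)
        # A real mover would migrate the file and leave a stub behind;
        # here we only report the placement decision.
        print(f"{path} -> {pick_tier(path, now)}")
```

A production policy would also weigh file age and type, as Brian notes, not just last access time.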
Erik: Now, there are a couple of other policy topics that I'm all too keenly aware of, at least here in China. One of them is government regulation around where data can be stored — cross-border data flows. This is not only an issue for China; it's an issue for India and other countries. Then there are the regulations around what data can be stored when it comes to individual PII. Are these topics that you touch, or are they usually handled somewhere else in the system?
Brian: Yeah, we absolutely play in this space, especially with products like DXi and ActiveScale, which are meant for longer-term data retention. When you think of backups, you intersect very quickly with compliance — HIPAA healthcare regulations and data retention policies for healthcare, for example.
Somebody in Europe told me that they need to save my healthcare data for 30 years past my death. I'm like, well, that's not going to help me. But maybe it'll help someone else. So the policies driving data retention require specific support in the products for essentially immutable data. You can't go around and delete the data. That's the healthcare and compliance side.
Same thing with legal documentation. A lot of financial records, business records, personnel data — these have to be retained for something like seven years. It's just a requirement. So you have to make immutable copies of them. We have capabilities in ActiveScale, our object storage, to make immutable objects, and DXi has immutable backups that can't be deleted.
For the object storage system, I talked to Thomas Demoor, one of our architects; he leads the ActiveScale team now. I kept beating him into submission: how can I access the data or get rid of it? It turned out I would have to disassemble the system and crush the disks. The software simply wouldn't allow it; you'd have to go down to the hardware level and start physically crunching it. The data is stored encrypted, so you can't make sense of it without the ActiveScale software. And the ActiveScale software won't let you delete it, because you put an object lock on it with a retention time, which you set per use case. So you could destroy the data by physically destroying the machine, but you can't delete it. It was an interesting conversation, and a frustrating one for me, because I was looking for a hole. So that's kind of important.
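The object-lock mechanism Brian describes maps onto the standard S3 API. A sketch assuming boto3 and a bucket created with object lock enabled; the endpoint, bucket, and key names are placeholders:

```python
import boto3
from datetime import datetime, timedelta, timezone

# Hypothetical S3-compatible endpoint with object lock support.
s3 = boto3.client("s3", endpoint_url="https://activescale.example.internal")

s3.put_object(
    Bucket="records",
    Key="finance/2023/ledger.csv",
    Body=b"...",
    # COMPLIANCE mode: no user, not even an administrator, can delete the
    # object version or shorten the retention period before this date.
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc)
    + timedelta(days=7 * 365),  # e.g. a seven-year records requirement
)
```

Any delete request against that object version before the retain-until date is simply refused by the storage layer, which is exactly the property compliance regimes ask for.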
You did mention data sovereignty — having data in certain places. We do a lot in the media and entertainment space. It's a big traditional market for StorNext. We have this file system, the StorNext File System, which is used by animation and movie production houses and sports teams. By the way, a lot of video production now comes from the sports teams themselves, whatever the sport — it varies country to country, obviously. Also things like, in the U.S., Major League Baseball as the overall network, and ESPN, the sports network. They are saving everything. Our StorNext product is for fast video editing and post-production, and it's used in all these places. The thing is, they have this extreme sensitivity — I've been somewhat surprised. They don't want to store their data in the cloud. They really want to keep it on-prem. They are extremely paranoid about data leakage, about movies being posted online before they even hit the theater. There's secrecy around all of this, because it all affects how much money they can make off the marquee debut of a film, say. It's just important.
It's not necessarily the cloud itself. It's getting the data to and from the cloud — there are too many points where it could be intercepted or hijacked, right? So a lot of these companies are looking to what we call repatriate data to on-prem for things like intellectual property and sensitive data. It's not only government data; it's commercial. The commercial data is driving a lot of our product and feature development for on-prem deployment of things that companies are now bringing back from the cloud, partly out of security concerns.
Erik: Yeah, it makes sense. We encounter that a lot when we say to manufacturers, for example: just get the data out of your plants and into the cloud. There's so much more you can do. You can look at data sets across factories and all this. But in the end, you're tapping into the nightmares of the executive management team: yeah, okay, maybe there's something interesting here, but what if our competitors get access to our process data? That what-if is a significant risk factor that's still top of mind.
You're covering a lot of industries — this is a problem every industry has. I guess some of the industries you're working with, like large media companies or financial institutions, are very sophisticated in terms of their IT capabilities. Are you typically working with them directly and helping them deploy your solutions on-prem at their facilities? Or are you often working with some sophisticated intermediary who's managing the scoping of the project? What does that look like?
Brian: The simple answer is both. One place where we do a lot of work with partners is government bids, contracts, and sales. There, we often work in the respective country with a specific vendor that is qualified and approved by that government. This is true not only in the U.S. but in Europe, and I suspect in China too. That is very much a strong partnership and collaboration to sell into very specific markets with very specific requirements.
With the bigger companies — we work directly with major movie houses and the like — we basically provide them with access to our development team. We have roadmaps driven by their emerging requirements, for things they want to do differently in the future. But we also have partners, including very specific partners in media and entertainment. And we have this emerging practice I wanted to mention around what we call AI-enriched video data. We have a product called CatDV.
CatDV is a media asset manager. What does that mean? It understands video formats and still images. It can break them apart. It essentially allows you to edit, annotate, and transform videos. The product has been around for 10 years and came to the company about when I started — actually, December 14, 2020 was the day the deal closed. It allows collaboration, annotation, and transformation of video, so you can do things like add captioning, flag parts of the video, and identify people or objects in the videos.
Ten years ago, this was all manual. When you watch a video and see captioning and subtitling, that all used to be done by hand. You know X-Ray on Amazon? If you've used Amazon at all, you can pull up X-Ray, and it'll list the characters on screen and the cast for you. You can click on a character and see their background. All of this is rapidly moving to being AI-driven. So we have been innovating around CatDV to integrate it with a lot of available AI packages: automatic language translation for re-dubbing the soundtrack without human intervention; captioning or subtitling in different languages after translation; and person identification — which actors are on screen — using a database that's region-specific, say, for a certain country. You train it with all of the actors in that country, run it over the video, and it'll flag who's in each scene and tag it. That is all automated.
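As an illustration of how accessible this kind of enrichment has become — this is not CatDV's actual integration, just a sketch using the open-source openai-whisper package — automated caption generation can be a few lines of Python:

```python
import whisper  # pip install openai-whisper

# Load a pretrained speech-recognition model (sizes range from "tiny"
# to "large"; bigger models are slower but more accurate).
model = whisper.load_model("base")

# task="translate" produces English text from foreign-language audio;
# omit it to transcribe in the original language instead.
result = model.transcribe("episode.mp4", task="translate")

# Each segment carries start/end timestamps, which is exactly what a
# subtitle track needs.
for seg in result["segments"]:
    print(f"[{seg['start']:7.2f} -> {seg['end']:7.2f}] {seg['text'].strip()}")
```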
The accuracy of these tools is approaching 90% to 95%, which, by the way, is sometimes better than what I see in subtitles for languages I'm remotely familiar with. It's often better than what was done by hand before. You ingest the data, run the AI models you've trained — on facial recognition, say — over your video, and come out with an enriched video result. Then you pump it out and produce it. That's your released video for a new market.
Yeah, Erik, this is the funny part. The cost of doing it manually was prohibitive, such that only certain videos — ones considered to have a big enough market to generate enough revenue — actually got these techniques applied. There are a lot of movies you can never get with English dubbing or English captioning, because it's just not worth it; there's not a big enough market. I'm a foreign film fanatic, by the way, so it's somewhat frustrating. I sometimes watch foreign films without understanding exactly what's going on. But AI basically removes all the barriers. We are talking to studios, especially in Asia, where there are so many languages, that want to release their archives to new markets across Asia. They've been struggling with having to do 10 language translations or subtitlings — cost-prohibitive except for things they knew would be winners. Now they're going through their archives looking to re-monetize material that hasn't been touched in 10 years. The amount of data we're talking about is flabbergasting. Okay?
Erik: Right. I guess the process has been heavily manual, and because of that, you're only serving 10% of the population with it. Again, I'm sitting here in China — maybe this is one of the industries best not left state-run — but the reality is, very little foreign media actually gets dubbed into Chinese. I'm constantly trying to practice, and I wish I had a nice library of movies — French movies or whatever — dubbed in Chinese. It just doesn't happen yet. I guess if we look forward 5 or 10 years, maybe sooner, AI will figure out how to do a decent job, including the voices. I think it's just a matter of time.
I think this is a good transition to my last question, because I now have a pretty good understanding of your business today. So this is certainly a really moving market. What is exciting for you about the future? If you look forward 1, 2, 3 years, what do you see happening either at Quantum or in the landscape around you that's keeping you motivated?
Brian: You know, I had never been as involved with AI and machine learning applied to data from a storage perspective as I have been since coming to Quantum. The company's mission — to tackle the unstructured data explosion and derive business value from it — is pushing us on several fronts: how do we enable AI and ML techniques, how do we integrate our storage products with the upper application layers, and how do we use CatDV to help customers make sense of their data and derive business value from it, while lowering the cost of getting that value, generation over generation, year over year?
The thing about AI/ML, to me, is not that it provides captioning and translation and such. It's that the cost of doing it, versus a person doing it, is so low that it blows open the number of use cases and the amount of media and data you would apply it to. It's a great democratizer in a lot of ways. Around that, given the amount of data people are storing, you have to keep lowering the cost of data while providing the performance needed for critical applications at critical points in the workflow. Everything else needs to be minimized in cost.
It's a difficult problem, and it's keeping us on our toes as we look at how the portfolio fits together, how the data moves, how we simplify the customer's life, and how we automate as much as possible — and then walk hand-in-hand with customers into an AI-enabled future. That touches every part of their data workflow at this point.
Erik: Great. Well, it's a great problem to be focused on right now. Whether you're a digital native company or you're a company with a 200-year legacy, managing data is core to your success today, right? So it's a problem everybody's struggling with.
Brian: Absolutely.
Erik: Great. Well, Brian, I really appreciate you taking time to speak with us today.
Brian: Yeah, thank you, Erik.
Erik: Good. Thanks, Brian.
Brian: Good. Take care.