Ep. 156
How do you manage petabytes of IoT data
Matan Libis, VP of Product, SQream
Monday, December 19, 2022

In this episode, we interviewed Matan Libis, VP of Product at SQream. SQream provides an analytics platform that minimizes Total Time to Insight (TTTI) for time-sensitive data, on-premises and in the cloud.

In this talk, we discussed the value of modern database architecture for extracting insights from petabytes of data. We also explored the convergence of traditional data warehouses and data lakes into lakehouses, in which large volumes of data are queried without being duplicated into a warehouse.

Key Questions:

  • What are the unique architectural requirements for managing peta-scale data sets?  
  • How do data management requirements differ across the manufacturing, telco, and banking industries?
  • What is the difference between a data lake, a warehouse, and a lakehouse?

Transcript

Erik: Matan, thanks for joining us on the podcast today.

Matan: Thank you for having me.

Erik: All right. I'm really looking forward to having you guide me here. This is a more technical topic than I typically host on the podcast. Often, we're a bit more focused on end-user applications. Here, we're really going to be talking about the fundamental infrastructure of how data is managed. So, I'm looking forward to getting out of my depth.

Matan: Yeah, we'll try to make it interesting.

Erik: I would love to start by learning a bit more about yourself. I mean, you're not the oldest person we've had on the podcast, but you've managed to fill the past 10 or 15 years with a lot of experience, including — I don't know if you were a founder, but you were eventually the CEO of a company that was acquired just a couple of years before you joined SQream. Can you touch on a few of the highlights, and on what led you to where you are today as VP of Product at SQream?

Matan: All right. So, we'll start from the end. I'm Matan. I've been the VP of Product at SQream for almost eight months, I think, which actually feels like a lot longer because we've been super busy. Before that, I came from a completely different space, focused on AR, augmented reality, which is super hyped these days. But we were focused more on the technology side, on 3D and computer vision technologies. I even had, as you mentioned, the chance to be the CEO of an Israeli startup that was developing, from the early stages, infrastructure for augmented reality and, by the way, for IoT use cases on manufacturing floors at the time.

In 2019, I was happy to lead the company through a successful acquisition by a US-based company. Then for two years, I worked at that company, at first running the Israeli office. By the end, I was running the whole R&D team. So, I've had the chance to play different roles in my career. Before that, I actually had my own startup in the field of education, and I had the great opportunity to bring it to the US market. I actually moved to the US for a few months and went from campus to campus across the country trying to get it adopted. I learned a lot from that time. So, this is the background.

You asked what led me to join SQream. First of all, I was intrigued by the challenges that the company was facing, and is still facing. I'm sure we're going to talk about them a little further down the line. Moving from on-prem to cloud and taking the technology stack into a new area is super challenging and super interesting. Personally, I love these kinds of challenges. Also, I didn't want to get stuck in the same space. I was in augmented reality for a lot of years, and I wanted to prove to myself and the rest of the world that I can deal with other technologies and other spaces, that I can learn more. I'm so happy I made this change. I have a wonderful team here in the product group at SQream. Going to the office every day is super fun, and I'm learning so much. I'm happy I have a stage, a place to share what I've been able to learn in the past eight months.

Erik: Awesome. Well, let's get into the topic of what technology you're dealing with. For a certain portion of the audience, probably the easiest way to describe SQream is that it's a competitor of Snowflake. So, that will intuitively make sense to some people in the audience. The headline on your website is fastest time to insight for data at any scale. It's a database infrastructure for converting raw data into insights. That's another way that we can think about it. How would you describe the field that you're in?

Matan: I'll try to take a slightly less technical approach first, and then we can jump into the more technical side. SQream actually allows companies to get answers from their data and get business insights, as you mentioned, where that was previously unattainable. We shine where other players can't deal with the amount of data. I'm talking, obviously, about scale. This is what we sell. We sell performance. This is always the key thing here. But not just that. We're allowing our customers to deal with their data. We usually shine above one petabyte of data. This is very unique to us. We're actually utilizing the GPU, and later on we can talk a little bit about the tech stack. We're utilizing the GPU in order to get faster insights on more data. This is what we bring to the table.

Erik: Yeah, we're definitely going to have to dig into the tech stack and why you're able to differentiate, and these use cases around massive data. But maybe we can just cover them first from a use case perspective. So, what would be a handful of scenarios where a traditional database or streaming database would not serve?

Matan: First of all, we're talking about the amount of data you would like to get insights from. Take one of our main use cases, for our top customer: they manage 10 petabytes of raw data on SQream. Those numbers are crazy. We're talking about 100 terabytes ingested into SQream on a daily basis, more than 100 data engineers querying it, more than 9,500 queries per hour. When we're talking about scale, this is scale. This is more than just buying more storage. The ability to actually get the insights and to run very complex queries and complex joins, this is where we feel comfortable. This is why most of our customers are enterprise customers. Usually, we are a better fit for a company that has a lot of data. Again, we're talking about more than one petabyte. This is where we usually feel comfortable.

Erik: Okay. What would be scenarios where this is required? Are we talking about managing data coming out of aircraft engines? Are we talking about very large networks of asset fleets, for vehicles or manufacturing plants? What would be the scenarios where you would be generating 10 petabytes?

Matan: Let's talk about two use cases. The first one, from the manufacturing space, is a very IoT use case. On the manufacturing floor, the dream for a manufacturer is to use all the data they are collecting to predict machine failures or equipment malfunctions before they occur. That's the dream, simply put. Everybody talks about IoT and how many sensors are collecting all this data all the time. But I feel like not enough people are talking about, "All right. How are we going to process the data? How are we going to get insights?"

So, in that use case, the manufacturer has a lot of sensors on the production line, on the floor. They're trying to detect a lot of things through that process. Then, as I mentioned, they're trying to process the data. They're building machine-learning models on top of SQream in order to get machine failure or equipment malfunction predictions. This is very common. We see it in a lot of manufacturers, but we also see it in telco, for example, with the many network malfunctions that they would like to be able to predict. So, this is one very IoT use case. By the way, this is one of our most common use cases. This is what our big customers are chasing. We see them expanding it into more and more factories, taking this use case to more and more production lines, because they get a lot of value out of it. We can all understand why. For big manufacturers, being able to predict failures or production line problems is worth a lot of money. So, I'm happy we are able to bring this value to our customers.
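
To make that pipeline concrete, here is a minimal sketch of the kind of feature-extraction SQL that could feed such a failure-prediction model. The table and column names are hypothetical, and date-function syntax varies by engine:

    -- Hypothetical sketch: roll raw sensor readings up into hourly
    -- features that a downstream failure-prediction model can consume.
    SELECT
        machine_id,
        DATE_TRUNC('hour', reading_time) AS reading_hour,
        AVG(vibration)                   AS avg_vibration,
        MAX(temperature)                 AS max_temperature,
        COUNT(*)                         AS reading_count
    FROM sensor_readings
    WHERE reading_time >= CURRENT_TIMESTAMP - INTERVAL '7' DAY
    GROUP BY machine_id, DATE_TRUNC('hour', reading_time);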

In terms of the second use case, let's talk about telco. We have customers that want to collect all the data they can on their users, on their customers: how long they spend on the phone, how many texts they've sent, all that kind of user data. Before SQream, they were only able to look at six months. They could analyze and query six months of data. With us, they can do two years.

Usually, at family dinners, people ask me, "All right. What do you do?" I'm like, "I work at this company called SQream, with a Q," which is already weird. They're trying to understand. "We're a relational database" doesn't make it any easier. Then I always present this use case. We're allowing companies to ask bigger questions whenever they want to ask them. If they were only able to analyze six months of data, now they can do more. This is simply the value we bring.
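
For illustration, the difference he is describing is essentially the size of the window in a query like this hypothetical one; the shape of the SQL stays the same, and only the date range (and the data volume behind it) grows:

    -- Hypothetical sketch: per-subscriber usage rolled up over two years.
    -- Previously, the date filter could only reach back about six months.
    SELECT
        subscriber_id,
        COUNT(*)          AS total_calls,
        SUM(duration_sec) AS total_talk_time_sec,
        SUM(sms_count)    AS total_sms
    FROM call_records
    WHERE call_date >= CURRENT_DATE - INTERVAL '2' YEAR
    GROUP BY subscriber_id;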

Erik: Okay. Well, I think that's a great one to dive into and understand a little more. Because from my perspective, I'd say: you had six months of data, and now you've got two years of data. That's 4x as much. It feels like that shouldn't be such a huge jump. Or if you want to handle four times as much data, maybe you just reduce the number of variables somehow. But it sounds like it was a step change in functionality. Why is there a step change? Why is there a barrier where another database would not be able to manage those two years of data just by adding more processing to the network?

Matan: So, there's more to it. It's not just about in-memory processing and things like that, or about where you're collecting this data. It's the fact that SQream can separate storage from compute, and can scale each of them independently of the other, that actually allows it.

Because with storage, you would think you can just buy more and more of it. But that doesn't mean you're going to be able to manage that storage, or that you're going to be able to get answers to your questions, to your queries. This is where the change actually begins. Our value proposition doesn't rest only on the amount of data we can process. It's also, as I mentioned, about the ability to scale storage and compute independently. We are able to support complex queries: multiple joins, sorting, and aggregation.
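
As a rough illustration of the kind of complex query meant here, with multiple joins plus aggregation and sorting over very large tables, consider a sketch like this (all table and column names are invented for the example):

    -- Hypothetical sketch: a multi-join aggregation of the sort that
    -- strains engines that cannot scale compute independently of storage.
    SELECT
        p.plant_name,
        m.machine_type,
        AVG(r.temperature)  AS avg_temp,
        COUNT(f.failure_id) AS failure_count
    FROM sensor_readings r
    JOIN machines m ON m.machine_id = r.machine_id
    JOIN plants   p ON p.plant_id = m.plant_id
    LEFT JOIN failures f
           ON f.machine_id = m.machine_id
          AND CAST(f.failure_time AS DATE) = CAST(r.reading_time AS DATE)
    GROUP BY p.plant_name, m.machine_type
    ORDER BY failure_count DESC;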

Obviously, at the end of the day, everybody talks about cost-performance. It's not enough just to get the best performance. You want to get it at a cost that makes sense. This is always the game we play. As for go-to-market, again, I mentioned enterprise. It always starts with a POC. Our customers always want to try it themselves. So, we always start with a POC, where they actually test it on their own workloads and see the value they can get from it.

Erik: Yeah, you have a few performance metrics on your website comparing SQream to Snowflake and to Redshift. The numbers there are roughly an 8x to 10x improvement in ingestion and query time, and then something like an 8x or 9x improvement in total cost. How would those be calculated? What would be the variables you measure in order to come up with these benchmarks?

Matan: As I mentioned, total time to insight starts with the ingestion of the data, if we're talking about a traditional warehouse. Later, when we talk about the future, we'll see how we're going to solve that part as well. But basically, first there is the ingestion time into the warehouse, and then the query. After you've already got the data in, you want to query it. So, it's the time it takes you to get the answer to your question, the answer to the query, plus the cost attached to it. We're talking about storage, cloud compute, and everything else involved.

So, this is how we measure total time to insight and benchmark against others. We take the same data sets and the same queries, and we run the same process: we ingest the same data, and then we query it. We see how much time it took and how much it cost us. Another one of our value propositions is the fact that, because we are utilizing GPUs, we need less physical hardware, which actually leads to a smaller carbon footprint. I know people don't always talk about it, but this is very big for certain types of customers. If they don't have enough room to house all the hardware they would otherwise need, with us they need less space for their compute and storage.
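
As a simplified worked example of that arithmetic (the numbers here are illustrative, not SQream's published benchmarks): if one stack needs 20 hours to ingest a data set and 2 hours to run the query, its total time to insight is 22 hours. If another ingests the same data in 2 hours and answers the query in 30 minutes, its TTTI is 2.5 hours, roughly the 8x to 9x gap Erik cites above. Cost is tallied the same way, by summing the compute and storage billed across both phases.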

Erik: That was an interesting point. So, you're running on GPUs as opposed to CPUs. Is that the case?

Matan: First of all, we utilize the GPU; it doesn't mean we don't use CPUs. We just make sure the right calculations happen on the GPU. By the way, as for why the GPU: a GPU is optimized for executing many simple operations and tasks at the same time, and this is actually how big data analytics works, with sorting tables, aggregations, and so on. As I mentioned, it's not only the GPU, but we offload most of the complicated tasks to it in order to get the best performance.

Erik: Got you. Now, are you running your own databases, or are you always built on Azure, or AWS, or another provider?

Matan: We always run our own database, but we have partnerships with vendors if we're talking about on-prem. Until not so long ago, we were a very traditional on-prem data warehouse. As I mentioned, we have separated compute from storage. So, we have a lot of partners, like Hitachi and Weka, that actually provide the storage to our customers.

If we're talking about cloud deployments, yes, we are what you would call multi-cloud. We can run on all the big cloud providers — GCP, AWS — and we even just launched a partnership with OCI from Oracle. So, it really depends on the use case and the customer. We try to be as agnostic as possible.

Erik: Okay. Got it. As you can see, I'm just going to keep peppering you with these ignorant questions.

Matan: That's okay.

Erik: That's my role here. My goal is that, by the end, I understand this, and then, hopefully, our listeners will too. If I'm thinking about the architecture: we have SQream for data analytics, and you run your own database. Then you have all of these other components. You have storage, maybe cloud storage through the larger providers. You have other things like analytics, building machine learning algorithms, and so forth. You maybe have data lakes, Hadoop, et cetera. You're running SQL queries. You have Python, TensorFlow, et cetera, on the data science side. How do all those components fit together into the solution? What are you doing in-house, and where does SQream interface with these other components of the data management stack?

Matan: In terms of components at SQream, we're talking about SQream DB. This is our main database, our main product. We also have what we're building right now, which is going to be the lakehouse; we're going to talk about it soon. But in terms of components, we have the engine, the query engine, which is the essence of our technology. We have the storage manager and the compiler, which is in charge of optimizing the actual SQL on SQream DB. And we have a lot of connectivity around it. This is the ecosystem around it.

You bring the raw data from a lot of places into the lake, for example. Then you use the warehouse in the middle and query whatever you need. Usually, you connect it to some BI tools like Tableau or Power BI. For that, we have a whole suite of connectivity based on JDBC, ODBC, or native connectors, which allows the end user, typically a data engineer who uses Tableau or other BI tools, to get insights and build reports and dashboards.

Erik: Okay. If we go back to those use cases you touched on earlier, one of them intuitively feels to me like real-time data is more important. If we're talking about a manufacturing environment, you're really trying to react to events in real time, or at least relatively close to it. In the other case, it sounds like it might be more about days or weeks, maybe even months, but with very large amounts of data you're trying to extract business insights from. Are there different architectures, different solutions, that would be necessary for a manufacturer versus a telco, versus a bank, versus an airline?

Matan: The answer is yes: the regulation is different, and we need to make sure we comply. Different industries have different regulations, for example around InfoSec. Today, obviously, this is a very big thing, especially for these types of customers. So, this is very sensitive. There are differences between them.

In terms of real-time: this is an analytical database, so it's not real-time, though sometimes it gets close. Beyond that, there's no difference in deployment between a telco and a manufacturer. Just the use cases differ, and what they do with the data. As you mentioned, some of our customers, after all the raw data has arrived in the warehouse and all the preparation has been done, are building machine learning algorithms on top of it to help them make decisions. Some of them are just building reports and analyzing months of data back. So, it depends more on the use case. In terms of requirements, as I mentioned, there are different regulations between the industries that we need to meet. Some industries are more on-prem; some are already moving to the cloud, which usually makes scaling easier. That's obviously one of the benefits of a cloud deployment. This is mostly the difference.

Erik: What about different types of data — structured data versus unstructured data like video or sound waves? Are there particular datasets that you prefer to focus on, that are more suitable for the structure of your database?

Matan: We mostly concentrate on structured data, coming from the warehouse world. As we move into the lakehouse and the future of data analytics, we are opening ourselves up to semi-structured and unstructured data, so we're going to be able to support other use cases besides the traditional data warehouse ones. Actually, we're starting with semi-structured; that's coming at the beginning of next year. Hopefully, fully unstructured data types will follow as well.

Erik: Okay. I've got to ask you: a lakehouse, is that like a combination of a warehouse and a data lake?

Matan: Yes, you got it.

Erik: Okay. It's the first time I've heard that. It makes sense.

Matan: We can touch on that point. I'd be happy to.

Erik: So, what would that mean? I guess a data lake is more of a pool; it can be more unstructured. A warehouse is more structured. So, what would a lakehouse mean? How would it be differentiated in structure from a data lake and a warehouse?

Matan: With our warehouse, the most important thing is the fact that we separate compute from storage. But with a traditional warehouse, you had to ingest all your data into the warehouse, right? This is why they call it a warehouse. Meanwhile, everybody is using lakes. We're talking about S3; this was the biggest thing, and I think AWS brought it: the fact that you can throw all your data types, wherever they come from, into one place, one lake. Now, with the lakehouse, you can query the lake itself. You don't need to move the data to a third location, wait, and pay for double the storage. And obviously, you save all that communication and network time. You can just query your lake. This is what they call a lakehouse.
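
For a rough picture of "querying the lake itself": many engines expose files sitting in object storage as external (or foreign) tables and query them in place. The sketch below loosely follows one common foreign-table convention; the exact DDL is engine-specific, and the names and paths here are hypothetical:

    -- Hypothetical sketch: expose Parquet files in S3 as a foreign table,
    -- then query them in place, with no ingestion into the warehouse first.
    CREATE FOREIGN TABLE lake_events (
        event_time TIMESTAMP,
        device_id  BIGINT,
        payload    VARCHAR(4096)
    )
    WRAPPER parquet_fdw
    OPTIONS (LOCATION = 's3://example-bucket/events/*.parquet');

    SELECT device_id, COUNT(*) AS events
    FROM lake_events
    GROUP BY device_id;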

I've got to say, this is the biggest trend in the data analytics world. We see all the big players trying to launch their own solution, their own answer to the lakehouse. We are among them. We're planning to launch our own lakehouse offering at the beginning of 2023. It's going to be native on the cloud, which is a very big step for us as an on-prem company: having a native cloud SaaS solution. We're really looking forward to that.

We're actually starting with the use cases we're seeing, and most of the use cases are going to be around preparation. We saw, for example, that in ad tech they are collecting a lot of data, and they are using the cloud. We see a lot of use cases where they need to prepare the data, move it from one place to another, and transform it. So, we are starting our offering as that preparation and transformation platform. Later in 2023, and in the next few years, we are planning to have a full lakehouse solution that, hopefully, one day is going to replace our on-prem solution.

Erik: Okay. Interesting. I'm looking at your website now. Is that Panoply, the platform? That's a low-code platform. Maybe that's a different solution.

Matan: No, it's not. I'm sharing with you some insights that are not public yet; it's not on the website yet. Panoply is a company we acquired, I think, almost a year ago. A great team with a great product. They actually help businesses create a warehouse with no code. This is our no-code solution for a different tier of data customers: department stores, say, or CFOs who would like to create some data dashboards. They don't have a team of data engineers that can help them build those dashboards, and they don't know SQL. With Panoply, they can just click a few buttons and create their own report. So this is obviously a different use case from SQream DB. Eventually, the two are going to come together, but we're still working on that.

Erik: Well, let's talk a bit more about the people in the loop here, because that's actually more where I'm coming from: working with the business side, with sales, marketing, manufacturing, et cetera, who understand day-to-day operations and where the use cases originate.

Often, their data science teams are relatively small, and maybe also relatively junior. So, they might have a couple of twenty-somethings. Somehow, they need to use that to translate a need into a result.

Then maybe somewhere in the organization there is a much more sophisticated, mature team. But actually accessing those resources can be difficult. There's a time bottleneck and so forth. It sounds like you already have a couple of different approaches, given the acquisition of Panoply. But who would be the typical users? If somebody wants to use a solution like this, what kind of capabilities do they need on their team in order to make effective use of a solution like SQream?

Matan: First of all, this is a SQL database, so the end users know SQL. Usually, those are data scientists or data engineers; they are the ones in charge of this. I always separate between two approaches. It's probably not unique to tech, but tech is the only thing I know.

Sometimes you go to a potential customer and try to explain why they need your solution, how they can benefit from your solution. The other approach is that they're already looking for this type of solution, and you just need to show them that you are the best, or the best for them. Lucky for us, the other players in our market, as you mentioned at the beginning, are huge companies: from all the cloud providers to Snowflake, one of the biggest and most successful IPOs on Wall Street. It just shows that the market is already educated. We just need to prove why we are better.

So, you asked about the end user. Usually, they know SQL, and often Python as well; that's very common with data scientists. They're very familiar with the BI tools and the ETL processes. They are usually not the decision makers for us, but they are the end users. From a product perspective, we try to think about both. We have the one challenge of how we're going to sell it, and then we need to make sure that the end user who actually uses it, after we've already made the sale, is happy. It's not two different approaches, but we need to make sure we're aiming the right message at the right receiver.

Erik: Would the budget usually come out of the CIO function or the CTO function? Or does it often come out of a business unit, or maybe engineering in a manufacturing organization?

Matan: We usually see it at the CIO level. Not specifically the CIO, but the C level. We're talking about big budgets and a very long commitment, and it's usually the CIO who can actually push it through the organization and who actually understands the need to replace, add, or adopt SQream. This is why there's such a big gap between the decision maker and the end user.

Erik: Yeah, I suppose this is not a bottom-up solution. This is much more of a top-down solution.

Matan: Unfortunately so. I'm a big fan of bottom-up solutions and bottom-up go-to-market strategies. But at least in on-prem, that's not very common.

Erik: Help me understand how this works in the end. Because, I guess, top-down, you have a set of requirements that are generally aligned on. But then you also have a more or less infinite number of ways you could use the data in this database to serve different teams. So, the organization says, "Okay, we have access to this database and the ability to extract insights." Then there's, say, a factory general manager or the head of a business unit who says, "Hey, I think we can extract insights that would be useful for me." Somehow, they have to translate those into requirements, send them to the data science team, and then translate the results back into solutions.

Can you share some best practices for how that works? Because, from my perspective, I'm often working with the teams that are bottom-up, with the head of a business unit in Asia, something like that. I see a lot of pain there: we might have a great team somewhere in the world, and great infrastructure somewhere in the world, but in terms of a VP's ability to access those resources and get something built on a reasonable timeframe, there's still a fair bit of pain. Do you have any best practices from companies that successfully maximize the benefit of putting this infrastructure in place?

Matan: Best practices are hard to generalize. We have a big, very technical delivery team that works with the customers on implementation and, sometimes, on training. As I mentioned, there's a lot of optimization that needs to happen, and it depends on the use case. As you mentioned, there are a lot of small tweaks you need to know in order to use it well. So, obviously, we have a lot of technical documentation, and making sure it's written and kept up to date is also the responsibility of my team in product.

A lot of it is also working with our delivery team. A lot of discovery happens in the POCs. We jump into a POC and understand the actual use case. Our delivery team gives customers a white-glove experience to really understand their needs: how we can do even better, how we can further optimize the hardware they've acquired, and which hardware they should buy in the first place.

Obviously, we rely on utilizing the GPU, and GPUs are not easy to come by these days. So, we need to make sure people order them in advance, and we help them get their hands on them if they need any help. So, it's very specific to the use case, especially when we're talking about 10 petabytes of scale.

Between me and you, and our listeners: sometimes we are also intrigued by our customers pushing our boundaries. Nobody sells a database saying, "All right. You can use it up to 10 petabytes." Yesterday, we were at 10 petabytes. In a month, it could be 15. We need to be ready for it, especially since there's no best practice for someone scaling their data by, I don't know, 50% in three months. We need to be very creative. Honestly, this is part of the magic that happens here at SQream. Because of who we are, we can react and give our customers that kind of experience.

Erik: Yeah, since you mentioned it, I'm curious about the hardware side. Have you also had to get creative over the past 12 months in situations where a company is not able to acquire the hardware it needs? Or is it generally just a matter of delaying something a month or so, and you're able to address the need on the hardware side?

Matan: Oh, so creative. From forging new partnerships: a potential customer wants to work with some storage vendor we've never met, and now we need to build that relationship in a month in order to win the opportunity. Sometimes we do the convincing the other way: we don't want to wait on the customer, we don't want to wait 60 days until all the hardware arrives at their factory. So, we convince them to do the POC on the cloud, for example.

For some of our customers, this is super new. Maybe this is their first actual interaction with the cloud. We bring in our partners from the cloud providers — from Google, or from AWS, or from Oracle, whoever is needed — to make sure it happens. So, creativity, at least the way I see it, is very startup-oriented. We have to be this way in order to play next to the competition that we have today.

Erik: Okay. Let me ambush you with a creativity question here, then. You know that I'm sitting in China. I don't know how closely you've been following this, but the US has recently sanctioned certain types of chips coming into China, which means the Chinese are going to have to get very creative about how they use older chips to accomplish machine learning and so forth.

I don't know how familiar you are with that. But basically, seven-nanometer chips and below are probably not going to be available in China for, I don't know, 10 years. Who knows? Do you anticipate that being a significant bottleneck for their ability to build the complex data infrastructures they would need for very sophisticated AI, or do you see ways they can architect around it?

Matan: Okay. First of all, this is just my opinion; I don't want to get too political here. Is it going to be a problem? I think it will. This is why they do it: they want to create this problem. But at least in the short term, I'm sure that China will overcome this challenge. One of our investors is Alibaba, speaking of cloud providers in China, so I'm pretty sure they are working on a solution. If someone in China is going to experience this lack of hardware from the West, I hope it's going to be only in the short term. Then hopefully, in a year or two, China is going to be able to catch up and have its own hardware.

We are also looking into other options, so we're not locked in only on the GPU. We're looking into other types of specialized hardware. By the way, GPUs are not very easy to come by these days, especially since COVID. And we're doing more than that. We used to go only with the strongest hardware we could get: there's a new GPU from NVIDIA, so let's recommend all our customers buy it, because stronger hardware will probably result in better performance.

But as we mentioned, this is a cost-performance game. The best hardware is not necessarily going to make sense in business terms. So, we always make sure to run our benchmarks not only on the latest hardware. We also test older hardware; sometimes the newer hardware gets you 2x the performance, but at 4x the price. In those kinds of cases, maybe you would prefer to buy the older hardware. Because, as we mentioned, this is not a real-time decision; usually, it can be close to real time. So, if something takes one minute instead of two but costs four times as much, it's not necessarily going to make sense.
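
To put illustrative numbers on that trade-off: twice the speed at four times the price works out to twice the cost per query, since in the time the slower machine answers one query, the faster machine answers two but costs four times as much to own and run. If a two-minute answer serves the business as well as a one-minute one, the older hardware roughly halves your cost per query.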

Erik: Yeah, I got it. It makes sense. Last question, then, from my side. You've already mentioned the lakehouse as a concept you're excited about for the future. Anything else over the next, let's say, three to five years that you're particularly excited about in this area?

Matan: I think, for me, this is the biggest question on the table. Everyone is moving to the cloud. Before, we talked about the lakehouse; but first of all, there has been a shift of data analytics to the cloud. It already happened a few years ago. We see Redshift, BigQuery from GCP, and obviously Snowflake; you cannot talk about data without mentioning Snowflake. We see them being super successful in the cloud.

But when you read articles, you see that most of them say something like 80% of data is still on-prem. Only now are we seeing companies shift to the cloud little by little. We still don't see a lot of businesses that run 10 petabytes on the cloud.

But you asked me what's going to happen in the next five years. I'm sure everything is going to shift to the cloud, and the way to get there is hybrid. Hybrid meaning not just a double offering of cloud and on-prem, but the two being connected: if you update something on-prem, it gets updated in the cloud, and the other way around. I think this will be the middle step toward the cloud. Maybe it will take 10 years to get there. But I have a lot of faith that, at some point, we will get there, and people are going to stop buying their own storage, placing it in some corner of the office building, and protecting it.

Erik: Yeah, it seems to make sense. It's just a matter of timing, but I think the economics are there. Also, from a security standpoint, people might feel secure on-prem, but the reality is, you're much better off having AWS control your data than your own IT team.

Matan: Exactly. Like we mentioned before: scale, and auto-scale, right?

Erik: Yeah. Great. Well, Matan, anything else that you feel is important for folks to know?

Matan: I don't think so. It was a lot of fun.

Erik: All right. Awesome. Well, thank you.

Matan: Erik, thank you. It was great meeting you. Thank you for hosting me.
