Mary Jo FoleyModeratorJune 19, 2019 at 11:28 am #618021
Our next MJFChat, scheduled for Monday, June 24, is between me and Rimma Nehme, Azure Cosmos DB Product Manager and Architect. The topic of our chat, unsurprisingly, is Microsoft’s Azure Cosmos DB.
Whether you’ve already dabbled with Cosmos DB or are just curious about how it works, how it stacks up against the competition, and more, “Cosmonaut” Nehme is ready to answer any and all questions about the service.
What questions do you have for Rimma about Cosmos DB? No question is too big or too trivial. I’ll be chatting with her on June 24 and will ask some of your best questions directly to her. Just add your questions below and maybe you’ll be mentioned during our next audio chat.
Brad SamsKeymasterJune 27, 2019 at 10:20 am #618167
You can find the audio playback here.
Mary Jo Foley: 00:04 Hi, you’re listening to the Petri.com MJFChat show. I am Mary Jo Foley, aka your Petri.com community magnet. I’m here to interview industry experts about various topics that you, our readers and listeners, want to know about. So today’s MJFChat is all about Cosmos DB, Microsoft’s Azure Cosmos DB NoSQL database. That’s a mouthful. My guest today is Azure Cosmos DB Product Manager and Architect Rimma Nehme. Thanks, Rimma, for coming on the chat.
Rimma Nehme: 00:40 Sure, Mary Jo. Nice to be here this morning.
Mary Jo Foley: 00:43 Thank you very much. In this chat in particular, I want to do a little demystification of a product that I think some people anticipate is going to be too complex, something they can’t get their hands around. So I’m going to start out by asking: if you had one minute to explain Cosmos DB to someone, like an elevator pitch, what would you say?
Rimma Nehme: 01:11 Oh, very simple. It’s like SQL for the cloud.
Mary Jo Foley: 01:14 That’s it. Well, that’s, that’s a great way to explain it.
Rimma Nehme: 01:17 Yeah. So Cosmos DB is the database service that was born in the cloud. As the cloud-native database service, it tries to bring all of the value and the promise of the cloud in its own form factor as a database service.
And so when we think about what it means to be cloud-native, I usually think of three core properties that are fundamental to the cloud as a design center. The first is global distribution: the cloud is everywhere, wherever users are, whether it’s North America, Asia Pacific, Europe, or Latin America, and so the database has to be everywhere, wherever the users are. The second one is the promise of elasticity, basically computational resources on demand: the fact that you can come to the cloud and ask for storage or compute on demand, whenever you need to, as much as you want to. We bring the same capabilities in the form factor of the database.
The third core property of the cloud is ultimately multi-tenancy and this notion of very, very fine-grained resource governance. What that means is that we take the same physical hardware and, by virtue of a lot of engineering rigor and the right architecture, we put multiple shards with copies of the data on exactly the same servers and machines.
What that allows us to do is drive up the utilization of the physical hardware. And by virtue of driving that utilization, we can pass all of the cost savings on to the customers, so that they don’t have to buy that hardware capacity in their own data centers. It’s also something that is very hard to achieve if you’re doing it in VMs as a hosted solution. So these are the core principles: being ubiquitous; being elastic in terms of storage and compute; and ultimately passing on the savings, by virtue of resource governance, multi-tenancy, and resource utilization, back to the customers.
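The fine-grained resource governance Nehme describes is commonly implemented as per-tenant rate limiting over a provisioned budget of "request units." Here is a minimal token-bucket sketch of that idea; the class name, numbers, and RU costs are illustrative assumptions, not Cosmos DB's actual implementation:

```python
import time

class TenantBudget:
    """Toy token bucket: each tenant gets a provisioned budget of
    request units (RUs) per second that refills continuously, so
    tenants sharing the same hardware cannot starve one another."""

    def __init__(self, rus_per_second):
        self.rate = rus_per_second
        self.tokens = rus_per_second       # start with one second of budget
        self.last = time.monotonic()

    def try_charge(self, ru_cost):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at one second of budget.
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= ru_cost:
            self.tokens -= ru_cost
            return True                    # request admitted
        return False                       # over budget: caller is rate-limited

tenant = TenantBudget(rus_per_second=10)
admitted = sum(1 for _ in range(15) if tenant.try_charge(1))
print(admitted)  # roughly 10 of the 15 back-to-back requests are admitted
```

In the real service, requests beyond the provisioned budget are rejected with a "request rate too large" error rather than silently queued, which keeps one tenant's burst from degrading a neighbor's latency.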
Mary Jo Foley: 03:41 All right. Got It. Would you say that the kinds of customers who should be looking at Cosmos DB fall into any particular size or industry? Like should it only be enterprise level customers or is there a case you could make for even an SMB to be looking at this technology?
Rimma Nehme: 04:01 Yeah, the way I put it, it’s actually a database for any developer, any customer, any enterprise, if you will, just like SQL in the 1990s. But today we are all in the cloud era. Whether you are a startup, an SMB, or a large enterprise, you’re facing exactly the same challenges: trying to store your data and get guaranteed performance for that data.
Your data will continue to grow. If you’re a startup, and the startup becomes a unicorn, chances are your data will continue to grow from gigabytes into terabytes and potentially even into petabytes. If you are a game development company and your game goes viral, all of a sudden you’ll get bursts in terms of the number of users you have to support, or the number of operations or requests per second that you need to support.
Rimma Nehme: 05:03 And whether you are in the gaming industry or the retail industry, or even the insurance industry, consumer goods, or financial services, these bursts in terms of number of users, number of operations, and storage size are ubiquitous. So look at modern data platform requirements.
The way I put it: scalability, performance, elasticity, high availability, the flexibility of dealing with any type of data, whether it’s documents, graphs, or whatnot, and being closer to where your users are, regardless of geographical location. These are all core fundamental properties, regardless of whether it’s a large enterprise or a small unicorn startup.
Mary Jo Foley: 06:00 So you started out talking about Cosmos DB being born in the cloud and being a cloud-native database. For people who have heard about Azure SQL, how does it differ from that? Because I know SQL wasn’t born in the cloud, but I think when you ask a lot of people, does Microsoft have a cloud database, they say, yeah, Azure SQL.
Rimma Nehme: 06:23 Maybe a little bit of the history behind the service could help. The Cosmos DB project started in 2010 as Project Florence. I think I might have told you the story behind it before. We named it Florence for two reasons, one official and one unofficial. The official one: if any of your listeners have ever been to Florence, Italy, there is a famous dome by Brunelleschi. It is viewed as the epitome and the jumpstart of the Renaissance movement, because he pioneered a lot of the architectural design principles that are viewed as the beginning of the Renaissance. And we wanted this to be the renaissance for data in the cloud, so we named it Project Florence.
Rimma Nehme: 07:25 But the other, unofficial reason: Dharma Shukla, a Technical Fellow and the founder of the Cosmos DB service, was on vacation in Florence, and that is anecdotally where the first version of the code worked. So we called it Project Florence. When we started in 2010, it was intended first to address critical developer pain points faced by internal Microsoft applications, the likes of Office 365, Xbox, and the universal store. Today we also have LinkedIn running on Cosmos DB, and also Yammer.
We are now onboarding GitHub onto Cosmos DB. The problems all of these applications were facing were scale, and performance at scale. Again, trying to meet developers and users wherever they are, because Microsoft is also a global company and we have operations and customers worldwide.
Rimma Nehme: 08:37 We observed that many of these problems are not unique to Microsoft applications. They are ubiquitous among third parties, among our customers, and among the customers coming into Azure. We set out to build this service trying to address, again, the needs that I described earlier. This is a system that is capable of elastically scaling.
As your data size continues to grow and as your computational needs vary over time, the system will elastically scale to those needs, so you don’t have to incur all of those pain points and can focus on your application rather than managing and maintaining your backend, especially as your data size and your computational needs continue to increase. And being able to provide very, very strict upper bounds on latency and performance.
Rimma Nehme: 09:41 To date, we are the only service that actually gives an SLA on latency at the 99th percentile. So again, if you’re building an application and performance is super critical, you don’t have to worry about it. You don’t have to worry about optimizing your indices or your schemas so that you can extract a little bit more performance here and there.
The service takes care of it for you. We also provide five-nines high availability, regardless of software failures, machine failures, network failures, or regional disasters. Again, if you were to try to do this yourself, it puts a lot of burden on the customers and the developers. They have to really design their system with high availability in mind. They have to put in a lot of redundancy, a lot of copies of their data, and keep the data consistent across those multiple copies.
Rimma Nehme: 10:37 It all, again, adds to that burden, and that is the pain point that ultimately we wanted to take away from the customer so that they can focus on their apps. Then there is being able to provide what I call a schema-agnostic experience to applications. The idea here is that your application logic will continuously keep evolving.
You know, you start out building maybe an operational application, let’s say for retail, like real-time payment processing or personalization or Customer 360. Over time your application logic will continuously change. You might want to bring additional data sources into your application. If your backend and your database are very rigid in terms of that fast evolution and agility of the application, it becomes yet another burden on the developer or the customer to keep the application logic in sync with your database.
Rimma Nehme: 11:45 You know, with the Schema and constantly evolve and keep its performance and highly available. And so we wanted to make it so flexible that you throw any data at it, it will happily absorb and it will automatically index all of the data on a by default. So you don’t have to again worry about doing schema management, index management and just focus on your application logic.
And then, last but not least, we wanted to meet the customers where they are, regardless of their adoption of technology, languages, or stack, and preserve their investments wherever they are. Ultimately, instead of taking the approach of saying there is one language for Cosmos DB, and this is how you should design your application, how you should manage your data, this is the data model you should use, we said we want to meet the customers wherever they are.
So if they want to use various data models, by all means, we will natively support them. If they want to use open-source languages and open-source APIs, for instance MongoDB or Cassandra, we will meet them there as well and enable sort of the best of both worlds: open source as well as cloud-native capabilities. And so, as a result of that, the service became a multi-model and multi-API service as well.
Mary Jo Foley: 13:16 So I wanted to ask you about that, because one of the top points everyone from Microsoft mentions about Cosmos DB is the multi-model approach. But I was curious to go back to NoSQL also, because sometimes I see Cosmos DB described as a NoSQL database, but it seems like it’s different from the typical NoSQL database because it can also handle relational data too, right?
Rimma Nehme: 13:45 Yeah. The way I put it, when most people think about NoSQL, they typically start thinking about the data model and the language first. Whereas actually, the roots of NoSQL systems came from being able to handle data at scale. So if I were to describe what NoSQL is: first and foremost, it’s scale.
It’s a scale-out system. And when you want to deal with data at scale, it fundamentally requires you to think about how you model that data and how you interact with that data slightly differently. For instance, you will not see typical canonical characteristics of relational databases, like primary and foreign key constraints, in scale-out systems, because in those ecosystems there is a violent disagreement between scale and preserving those semantics. When you want to continue to scale, to be scalable to petabytes of data and trillions of operations per second, NoSQL systems will typically favor the scale over preserving those semantics.
Rimma Nehme: 14:54 And so, in a sense, you’re trading off different capabilities, because ultimately these are the use cases and scenarios that you’re going after. So in that regard, Cosmos DB is a NoSQL system in the sense that it’s designed for scale-out workloads, scale-out in terms of, again, both storage as well as compute.
So if your data size continues to grow from gigabytes into terabytes into petabytes, it will seamlessly scale out. Similarly, as your computational needs fluctuate or increase, and you go, let’s say, from tens to hundreds to thousands to millions of operations per second, the backend of the service will also scale. And then on top of that, we add what I call syntactic sugar, which is the various data models and the various API languages on top of the service, to be able to interact with various types of data. In a sense, all of these new systems, like NewSQL systems, take a similar approach, where the fundamental core design is a scale-out, NoSQL-like architecture, and then on top of that you add various data model supports and various API languages to interact with the data.
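The seamless scale-out Nehme describes rests on partitioning data by a key. Here is a minimal illustration of hash-based routing; it is a deliberate simplification (Cosmos DB actually maps hashed partition-key ranges onto physical partitions so they can be split without reshuffling everything, and the function name here is hypothetical):

```python
import hashlib

def route(partition_key, num_partitions):
    """Route a document to a physical partition by hashing its partition
    key -- the basic mechanism that lets storage and throughput scale out
    by adding partitions."""
    digest = hashlib.sha256(partition_key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

# Simulate 10,000 documents keyed by user id spread over 8 partitions.
keys = [f"user-{i}" for i in range(10_000)]
counts = [0] * 8
for k in keys:
    counts[route(k, 8)] += 1

# With a good hash, the load spreads roughly evenly (about 1,250 each).
print(min(counts), max(counts))
```

The same idea explains why choosing a high-cardinality partition key matters: a skewed key (say, one customer id owning most documents) would concentrate load on one partition and defeat the scale-out.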
Mary Jo Foley: 16:21 Okay. I remember the predecessor to Cosmos DB was Microsoft’s NoSQL database called DocumentDB, right? So that was kind of the heritage.
Rimma Nehme: 16:30 Yes. In the early days, we started as Project Florence. We took a subset of the capabilities of Florence and first manifested it in the form factor of DocumentDB, with just the document data model support. But in parallel, behind the scenes, we had been battle-testing other data models, other extensions of the global replication, and multi-master capabilities. And then in 2017, as an aggregate of all of these capabilities, we launched it as the Cosmos DB service.
Mary Jo Foley: 17:07 Something we haven’t really talked about yet, though I think you and I have talked about it in the past, is how Cosmos DB handles consistency, and why that’s different from the way many database administrators might think about consistency. Could you give us a high-level explanation of how that is different?
Rimma Nehme: 17:30 Sure, sure. Consistency is actually a very fascinating topic; it deserves a session in its own right. But when it comes to consistency, if you look at the market of operational databases to date, you will typically notice what we call a dichotomy between traditional relational systems and traditional NoSQL systems. On the one hand, most traditional relational databases typically offer you strong consistency, what we also call perfect consistency, where at any point in time, whenever the application requests the data, you always get the freshest, most up-to-date view with respect to all of the recent updates. That typically comes at the cost of availability and performance, with latency implications, because you need to run a lot more processing in order to provide those strong consistency guarantees.
Rimma Nehme: 18:39 On the other hand, most traditional NoSQL systems, MongoDB, Cassandra, and many others, were targeting a different type of application, like web and mobile, where potentially the highest-order bit, the goal you’re going after, is latency, very, very fast serving of the data, and availability.
So if you’re building, for instance, a recommendation website and people are posting comments about products, it’s very important to serve them quickly. But if one individual comment is missing, it’s potentially not the end of the world; it’s not a big problem, but latency is super, super important. So traditional NoSQL systems typically emphasize weak consistency, or eventual consistency, in order to gain low latency and high availability. And for us, when we approached this data consistency problem, the first insight that came to us is that instead of viewing it as two extreme binary choices, strong versus eventual, we viewed data consistency as a spectrum of choices rather than the extremes.
Rimma Nehme: 20:00 Strong consistency and eventual consistency are the ends of the spectrum, but there are many consistency choices along it. And developers and customers can use these options to make precise choices and granular tradeoffs with respect to their own applications, something that makes the most sense for them and picks the right tradeoff between consistency, high availability, and performance.
And so, in addition to strong and eventual, we also implemented three intermediate consistency models: bounded staleness, session, and consistent prefix. To make it very intuitive, we put an animation inside the portal, which I believe one of your readers actually asked you to ask me about. So I highly recommend it: anybody can go into the portal and click on the default consistency tab. We show there a musical-note animation: if you were to write data in one region, the animation shows, depending on the consistency model, how the other regions will view and receive that data.
Rimma Nehme: 21:13 And so it gives a very, very visceral understanding of how the data will actually manifest itself depending on the consistency model that you choose. And again, we made it simple, in the sense that by virtue of a click in the portal, or by making a single API call, you can change it at any point in time.
And an interesting insight, remember I told you about the spectrum of choices: more than 93% of our customers are actually using these intermediate consistency models instead of the two extremes. So it was actually a validation of that initial hunch, the initial hypothesis, that approaching data consistency as a spectrum was the right approach.
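The consistency spectrum can be made concrete with a toy model in which replicas apply a shared write log at different speeds, and the chosen level bounds how stale a read may be. This is an illustrative sketch under simplified assumptions, not the actual replication protocol (session and consistent prefix, two of the real intermediate levels, are not modeled here):

```python
class Replica:
    def __init__(self):
        self.applied = 0                   # index into the global write log

class ToyAccount:
    """Toy model of the consistency spectrum: the consistency level
    decides how far behind the latest committed write a read may be."""

    def __init__(self, num_replicas):
        self.log = []
        self.replicas = [Replica() for _ in range(num_replicas)]

    def write(self, value):
        self.log.append(value)

    def replicate(self, replica_idx, steps=1):
        r = self.replicas[replica_idx]
        r.applied = min(len(self.log), r.applied + steps)

    def read(self, replica_idx, level="eventual", max_staleness=0):
        r = self.replicas[replica_idx]
        if level == "strong":
            r.applied = len(self.log)      # must see every committed write
        elif level == "bounded":
            # May lag, but by no more than max_staleness writes.
            r.applied = max(r.applied, len(self.log) - max_staleness)
        return self.log[r.applied - 1] if r.applied else None

acct = ToyAccount(num_replicas=2)
for v in ["a", "b", "c", "d"]:
    acct.write(v)
acct.replicate(1, steps=1)                 # replica 1 has only applied "a"

print(acct.read(1, "eventual"))                  # 'a' (arbitrarily stale)
print(acct.read(1, "bounded", max_staleness=1))  # 'c' (at most 1 write behind)
print(acct.read(1, "strong"))                    # 'd' (always the latest)
```

The tradeoff is visible in the code: "strong" forces the replica to catch up (extra latency and coordination), "eventual" returns immediately but arbitrarily stale data, and "bounded" sits in between with an explicit, tunable staleness limit.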
Mary Jo Foley: 22:03 Speaking of customers, I know, you know, Microsoft can’t disclose names of customers without their prior approval, but I was curious if you could talk about, or maybe give a couple of examples of scenarios where a customer ended up choosing Cosmos DB because of a particular need or a scenario. So we talked about the applications that Microsoft has that run on Cosmos DB and there’s a growing family of those. But when you’re looking out at customers, is there like anything you could say as a trend or a general indicator of the types of problems that people see Cosmos DB as uniquely set up to fulfill?
Rimma Nehme: 22:49 Yes. There are a number of verticals and industry segments that absolutely love what Cosmos DB has to offer, and some of them include retail, e-commerce, consumer goods, and IoT scenarios from various industries, whether it’s automotive or airlines or industrial IoT or connected buildings or connected anything. We also see a lot of adoption in financial services, in logistics, and in sort of anything that has to do with moving things. We also see a lot of adoption in oil and energy and utilities.
The other sector we see is entertainment, media, and gaming. As for what attracts all of these customers: some scenarios are specific to their businesses and the industries they’re in, and some can be distilled into more technical scenarios.
Rimma Nehme: 24:09 What they absolutely love is the elasticity. If I were to pick, for instance, retail and e-commerce, elastic scale is extremely attractive to e-commerce, retail, and consumer goods. This is not a surprise, because we all know about these events called Black Friday and Cyber Monday, right? All of a sudden, all of these customers need the ability to scale to sometimes 10x to 100x of their normal traffic and their normal workload patterns. Typically, if they were to do this on premises, they would have to buy the physical servers and machines needed to sustain that peak capacity. And what that means is that for most of the year, they’ve already paid for that hardware and they’re not using it.
Rimma Nehme: 25:10 It’s a similar story if they were to do it in IaaS using VMs: typically four to six months before the event, they will set up their configuration to make sure that everything is ready.
And again, they’re spending money on capacity that they’re not utilizing, and if they’re off by an order of magnitude, or even slightly, it impacts both their revenue and the perception from their own customers if the services aren’t available. So this elasticity, just in time, whenever they need these capabilities, becomes super, super important. And also doing it in a geo-replicated sense, because many of these customers have users in different geographical regions, on different continents, in different countries, with predictable latency characteristics and predictable performance. Again, they don’t have to worry about the experience their customers will have.
Rimma Nehme: 26:14 So it gives that peace of mind, with very aggressive guarantees in terms of performance, scale, and high availability, so that, again, they can focus on the upper-tier logic of their business application, like serving better recommendations, maybe providing some personalization, maybe integrating with other third-party tools and scenarios, because their data is always there whenever they need it. What I’ve also noticed is that this phenomenon of Black Friday and Cyber Monday is actually prevalent in every single industry. It just depends on the time of the year and the industry that you’re in.
For instance, with insurance, it’s typically January, because a lot of people start a new year by signing up for new policies, so all of a sudden you see that burst. Or take companies that deal with, let’s say, spring gardening; we have big, big customers like that. When people start preparing for spring, they need to buy lumber, soil, mulch, and all that other stuff.
We see a lot of bursts with those customers as well. In the financial industry, again, you see bursts whenever some offers become more popular than others, or when you’re running a campaign. With gaming, the game launch is your Black Friday. And so this burstiness, these fluctuations, and being able to elastically scale become super, super attractive for customers.
It becomes like a turnkey solution: they don’t have to go and pause their application, add more clusters, add more nodes, with the application down, and then go resurrect it. None of that. It is elastic and online, and as a result, customers really, really like it.
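The elastic, turnkey scaling described above can be sketched as a simple rule that provisions throughput for observed demand, clamped between a floor and a ceiling. The function name, the RU-per-request cost, and all the numbers below are illustrative assumptions, not the service's actual autoscale formula:

```python
def autoscale(observed_rps, floor, ceiling, ru_per_request=5):
    """Toy autoscale rule: provision enough request units per second for
    the observed demand, clamped between a floor (cost control) and a
    ceiling (burst headroom), so capacity follows traffic online."""
    needed = observed_rps * ru_per_request
    return max(floor, min(ceiling, needed))

# A Black Friday-style burst: 10x-100x normal traffic, then back to normal.
for rps in [100, 1_000, 10_000, 100]:
    print(rps, "->", autoscale(rps, floor=1_000, ceiling=100_000), "RU/s")
```

Contrast this with the on-premises model described in the interview, where the peak capacity (here, 50,000 RU/s) would have to be purchased up front and paid for even during the long stretches when demand only justifies the floor.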
Mary Jo Foley: 28:21 Nice. Last question, and it might be kind of a silly question, but I still want to ask it anyway. Do you ever see a day when Cosmos DB will actually completely replace SQL Server or Azure SQL? If you do, when? And if you don’t, why not?
Rimma Nehme: 28:43 Oh, that’s a big one. It’s not a silly question.
Mary Jo Foley: 28:48 The reason I thought it might be silly is because the two things aren’t really the same. I mean, they are both databases and database services, but I was kind of like, would there ever be a day when Microsoft would retire SQL Server?
Rimma Nehme: 29:05 I can’t comment on that. But the way I put it, the conversation about convergence, SQL versus NoSQL, is the wrong conversation, as is even the term itself. NoSQL, I would say, is a fad.
Mary Jo Foley: 29:26 They’re like serverless, right?
Rimma Nehme: 29:29 The way I put it, the fundamental needs that customers face today are scale, performance at scale, guaranteed performance at any scale, elasticity, high availability of their data, and being able to be wherever their businesses and their users are, which is globally, because at the end of the day, the world is becoming flat, as they always say. These are the fundamental core tenets of how we deal with data in the modern age.
Rimma Nehme: 30:08 So the conversation of SQL versus NoSQL, and being religious about it, I feel is the wrong question to the wrong problem. The problem we’re trying to address is fundamentally solving customer problems in the modern age. The other thing is, going forward, the interesting things will happen with data at scale, whether it’s AI, IoT, or any type of insight that you’re seeking out of your data: trying to handle more and more of the data coming from your devices, your Internet of Things, from Twitter feeds to Facebook statuses, from customer service, from call centers, from loyalty programs. Somebody said, and I don’t know the official number, but around 90% of the world’s data was generated only in the last two years.
Rimma Nehme: 31:07 And it largely comes from semi-structured and unstructured data, basically machine-generated data, voice over IP, IoT, and whatnot. Given this data explosion, while we have all these tools, there is still a huge gap between the available data and the ability to do something with that data. And ultimately, that’s the gap we’re trying to fill.
The dichotomy in the conversation of SQL versus NoSQL, the way I put it, is sooner or later going to go away. We see this convergence in the sense that, at the end of the day, you are dealing with data, small scale or big scale, it doesn’t matter. You’re trying to get some meaningful insights out of it and either provide a service to your customers, find means of differentiating your business, monetize it, or help third parties.
Rimma Nehme: 32:06 So in that regard, in my lifetime, or at least in my professional career, there are a lot of legacy systems where there probably isn’t much ROI in going in and aggressively trying to move them to this cloud-native service. They’ll stay largely where they are. But for any new endeavors, any new applications, especially if you anticipate and want to do something at large scale, Cosmos DB becomes sort of a de facto go-to solution.
Because in a sense it presents the cloud promise in the database form factor. I don’t know if that answered your question or not, but just like music in the 1960s, whatever music came out then was viewed as different. By the time I retire, I think the world will be very, very different.
Mary Jo Foley: 33:10 Good, good point. Well, thank you very much for this chat, Rimma. For all of you regular listeners, we’re going to be back in a couple of weeks with our next guest, so be sure to watch for that. I’ll post the information on petri.com, and that’ll be your signal, listeners, to send in your questions. All you have to do is go to the MJFChat area in the forums on Petri and submit your questions there. In regard to this chat with Rimma, look for the audio and the transcript, as with all of our chats, in the next few days. Thank you again so much.
Rimma Nehme: Thank you, Mary Jo. Thank you, everyone.