
All you need to know about Yelowsoft’s new version update

Updated on October 26, 2020

At Yelowsoft, our team is dedicated to achieving excellence in everything we do. We have done so in the past by continuously reinventing ourselves, and this process of reinvention has now become our trademark.


Continuing this habit of reinvention and innovation, we have now come up with an all-new version update of our taxi solution. This update makes our solution faster, more efficient, and more advanced. So, what is this new version all about? Let’s have a look.

Why did we move to the newer version update?

The main reason for moving to the newer version was to achieve faster speed and to make our solution more robust and advanced. A taxi solution has to deal with a massive amount of data in real time, because it receives, processes, and sends a huge amount of data continuously. In technical terms, the following three processes are always taking place simultaneously.

  • Real-time ingestion: receiving data in real time

  • Real-time processing: processing the received data in real time

  • Real-time update: sending the processed data in real time

Let’s understand this process with an example. Suppose the system receives a booking request. Now, the system has to find the nearest driver for that request. For that to happen, the system requires the real-time location of all the drivers; receiving this data continuously is what we call real-time ingestion.

Once the system receives the request, it processes it to find the nearest driver, which is known as real-time processing. Once the system finds the nearest driver, it notifies that driver about the request, which comes under real-time update.
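To make this concrete, here is a minimal sketch of the three stages in JavaScript. All the names here (ingestLocation, findNearestDriver, notifyDriver) are illustrative stand-ins, not our actual production code.

```javascript
// Real-time ingestion: drivers push their latest locations.
const driverLocations = new Map();
function ingestLocation(driverId, lat, lng) {
  driverLocations.set(driverId, { lat, lng });
}

// Real-time processing: find the nearest driver to a pickup point.
function findNearestDriver(pickup) {
  let nearest = null;
  let best = Infinity;
  for (const [id, loc] of driverLocations) {
    // Squared distance is enough for comparing which driver is closest.
    const d = (loc.lat - pickup.lat) ** 2 + (loc.lng - pickup.lng) ** 2;
    if (d < best) { best = d; nearest = id; }
  }
  return nearest;
}

// Real-time update: notify the chosen driver (stubbed here as a message object).
function notifyDriver(driverId, booking) {
  return { to: driverId, type: 'NEW_BOOKING', booking };
}

// Two drivers stream their positions; a booking comes in; d2 is closer.
ingestLocation('d1', 23.03, 72.58);
ingestLocation('d2', 23.01, 72.57);
const booking = { pickup: { lat: 23.0, lng: 72.56 } };
const driverId = findNearestDriver(booking.pickup);
const update = notifyDriver(driverId, booking);
```

In the real system each of these stages runs continuously and concurrently; the sketch only shows the data flow for a single booking.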

We had to come up with an update that could seamlessly manage all this data in real time without compromising the speed of the solution. To achieve this, we made several changes, which we are going to discuss one by one.

Microservices architecture

In our newer version update, we have adopted a microservices architecture. First, let’s understand what microservices are.

  • Microservices are tiny, independent, and loosely coupled. In this, every service is a separate codebase that can be seamlessly managed by a small team of developers.

  • Developers can deploy these services independently. This enables the team to update any existing service without redeploying or rebuilding the entire application.

  • The communication between services takes place via well-defined APIs. However, the details of the internal implementation of each service are kept hidden from the other services.

Now that we have understood microservices, let’s have a look at their benefits.

Smaller code base

In a monolithic application, code dependencies become tangled, so you need to touch code in many places to add a new feature. Microservices architecture minimizes these dependencies by not sharing data stores or code, which ultimately makes it easier to add new features.

A mix of technology stacks

The microservices architecture lets you pick the technology best suited for each service. You can also use a combination of technology stacks.

Scalability

Microservices are great when it comes to scalability, as they enable you to scale services independently. With this, you can scale the subsystems that need more resources without scaling the whole application.

We used microservices architecture in our newer update because it enabled us to divide work into multiple services such as:

  • The driver services will take care of the work related to the drivers.
  • The auth services will take care of all the authentication work.
  • The matching service will match the ride request by finding the best-suited driver.

With microservices architecture in place, we were able to identify the services that were the most heavily loaded. Moreover, it also enabled us to enhance those particular services, which in turn made the system faster.

This was a huge bonus for us, as earlier, in the monolithic architecture, we had to scale the whole server just to enhance any one particular service.
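As an illustration of this service split, here is a toy sketch in JavaScript. Each “service” hides its internal state behind a small API, mirroring how the real services communicate only through well-defined APIs; the names and logic are simplified stand-ins, not our production code.

```javascript
// Auth service: internal token store is hidden behind a small API.
function createAuthService() {
  const tokens = new Map(); // internal state, invisible to other services
  return {
    login(userId) { const t = `token-${userId}`; tokens.set(t, userId); return t; },
    verify(token) { return tokens.has(token); },
  };
}

// Driver service: owns driver availability and location data.
function createDriverService() {
  const drivers = new Map();
  return {
    setAvailable(id, lat, lng) { drivers.set(id, { lat, lng }); },
    list() { return [...drivers.entries()]; },
  };
}

// Matching service: depends only on the driver service's public API.
function createMatchingService(driverService) {
  return {
    match(pickup) {
      let nearest = null, best = Infinity;
      for (const [id, loc] of driverService.list()) {
        const d = (loc.lat - pickup.lat) ** 2 + (loc.lng - pickup.lng) ** 2;
        if (d < best) { best = d; nearest = id; }
      }
      return nearest;
    },
  };
}

const auth = createAuthService();
const driverSvc = createDriverService();
const matching = createMatchingService(driverSvc);

const token = auth.login('rider-1');
driverSvc.setAvailable('d1', 23.03, 72.58);
driverSvc.setAvailable('d2', 23.01, 72.57);
const matched = auth.verify(token) ? matching.match({ lat: 23.0, lng: 72.56 }) : null;
```

Because each service exposes only its API, any one of them can be rewritten or scaled without touching the others — which is exactly the property we wanted.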

Segmentation of data as per the use case

We segmented data as per their use case and ensured that they are stored separately. This separation of storage ensured that real-time operations don’t get affected when anyone generates a report from the data. As per the use case, we divided data into three categories:

Reporting data

We created a dedicated reporting database and ensured that all reporting data is stored there.

Streaming data

Since we were dealing with a continuous stream of data, we stored the streaming data in a streaming database. We used Kafka for this, which enables streaming of driver locations and trips.
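To illustrate the idea, here is a tiny in-memory stand-in for a streaming topic in JavaScript. Real code would use a Kafka client library; the topic and message shapes here are illustrative only.

```javascript
// A minimal in-memory publish/subscribe topic, standing in for a Kafka topic.
function createTopic() {
  const subscribers = [];
  return {
    publish(message) { for (const fn of subscribers) fn(message); },
    subscribe(fn) { subscribers.push(fn); },
  };
}

const driverLocationTopic = createTopic();

// A consumer (e.g. the matching service) keeps only the latest
// location per driver, since newer messages supersede older ones.
const latest = new Map();
driverLocationTopic.subscribe((msg) => latest.set(msg.driverId, msg));

// Drivers stream their locations continuously.
driverLocationTopic.publish({ driverId: 'd1', lat: 23.03, lng: 72.58 });
driverLocationTopic.publish({ driverId: 'd1', lat: 23.02, lng: 72.57 });
```

The key property this models is that location data is an unbounded stream consumed as it arrives, rather than rows queried from a relational store.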

Master data

We kept master data in the master database. This database holds data that must not be lost, such as driver information.

Switched from Python to Node.js

Before deciding to move to Node.js, we compared Python and Node.js and found that Node.js had broader support, being JavaScript. Apart from this, the main advantage Node.js had over Python was its asynchronous nature.

Asynchronous processing is a must-have for all real-time applications, especially for something like a taxi application where a massive amount of data has to be processed in real time. Let’s understand what asynchronous processing is with a simple example.


Let’s assume that you’re standing on top of a mountain with 1,000 balls, and you have to push all of them to the bottom of the mountain in the least possible time. It’s quite obvious that you can’t push all 1,000 balls at once, so you’re left with two options.

The first option is to push them one by one: push one ball and wait for it to reach the bottom before pushing the next one. With this technique, you will take a long time to finish the task.

Now there’s a second option, in which you can push balls one by one without having to wait for them to reach the bottom. With this technique, you can push all those 1000 balls in the least time possible.

In this example, the first technique is synchronous execution and the second is asynchronous. This hypothetical example makes it clear that asynchronous execution is faster than synchronous execution.

Now, let’s see how asynchronous execution helps boost server performance.

Assume that a ball in the above example is equivalent to one query to the database. Whenever you process data synchronously in a massive project with many aggregations and queries, each query blocks code execution until it completes.

However, if you’re processing it in asynchronous execution, then you can just execute all the queries at once and then collect the results afterwards.
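Here is a toy comparison in JavaScript, using a fake query function as a stand-in for real database calls. The timings and names are made up for illustration.

```javascript
// Record the order in which queries are issued.
const started = [];

// A fake database query: resolves after a short delay.
function fakeQuery(name) {
  started.push(name);
  return new Promise((resolve) => setTimeout(() => resolve(`${name}-result`), 10));
}

// Synchronous style: each query waits for the previous one to finish,
// so total time is roughly the sum of all query times.
async function sequential() {
  const a = await fakeQuery('a');
  const b = await fakeQuery('b');
  return [a, b];
}

// Asynchronous style: issue all queries at once, collect results afterwards,
// so total time is roughly the time of the slowest single query.
async function concurrent() {
  return Promise.all([fakeQuery('a'), fakeQuery('b')]);
}

// With concurrent(), both queries are already in flight before either resolves.
concurrent();
```

The `sequential` version is shown only for contrast; with many queries per booking, issuing them concurrently is what keeps the event loop from being blocked.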

Caching layer

We also added a caching layer, where we decided to put data that is frequently used in real-time processes such as finding the nearest driver. This enabled us to fetch the data used in real-time processes directly from the caching layer, eliminating the need to go to the database.

The addition of the caching layer made our system faster for many processes, since retrieving data from the caching layer takes far less time than retrieving it from the database.

Moreover, we also ensured that every time there’s an update in the database, the caching layer gets updated as well. To verify this, we carried out several tests across numerous scenarios and finally succeeded in achieving our goal.

We also ensured that our system is not totally dependent on the caching layer. If the caching layer goes down, the system still works: it fetches the required data from the database and repopulates the caching layer automatically.
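The overall pattern here is the classic cache-aside approach with write-through updates, which can be sketched like this in JavaScript. The in-memory Maps stand in for the real caching layer and database, and the record contents are made up for illustration.

```javascript
// Stand-ins for the caching layer and the database.
const cache = new Map();
const database = new Map([['driver:d1', { id: 'd1', name: 'Asif' }]]);

// Cache-aside read: try the cache first, fall back to the database,
// and warm the cache on a miss so the next read is fast.
function getDriver(id) {
  const key = `driver:${id}`;
  if (cache.has(key)) return cache.get(key);
  const row = database.get(key);
  if (row !== undefined) cache.set(key, row);
  return row;
}

// Write-through update: every database write also refreshes the cache,
// so the two never drift apart.
function updateDriver(id, fields) {
  const key = `driver:${id}`;
  const row = { ...(database.get(key) ?? { id }), ...fields };
  database.set(key, row);
  cache.set(key, row);
  return row;
}

const first = getDriver('d1');   // cache miss: reads the database, warms the cache
const second = getDriver('d1');  // cache hit: served from the caching layer
updateDriver('d1', { name: 'Asif K.' });
```

If the cache is wiped (the “caching layer goes down” case), the next `getDriver` call simply falls back to the database and repopulates the cache, which is the resilience property described above.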

App level

The points above were all server-side changes. However, we also made major changes on the app side. Let’s discuss them one by one.

Lower battery consumption

We made our application light by using lightweight servers, which made the connections lighter, and by optimizing the code. Secondly, we divided data into several segments so the app can fetch only the required segment instead of the whole bunch of data. This way, we were able to avoid multiple API calls.

Low network usage

Apart from lowering battery consumption, the lightweight design of our application also improved its performance on low-bandwidth networks.

Making the mobile application faster

To make the mobile application faster, we decided to fetch only what was needed instead of the entire bunch of objects. Moreover, we also decided to cache recurring objects and take them from the mobile device’s local cache rather than going to the server, which boosted the performance of our application.
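Here is a toy JavaScript sketch of that idea: fetch only the fields a screen needs, and keep recurring objects in a local cache. The function, endpoint, and field names are all made up for illustration.

```javascript
// Local cache of trimmed objects, standing in for the mobile app's cache.
const localCache = new Map();

// Keep only the listed fields from a larger object.
function pickFields(obj, fields) {
  return Object.fromEntries(fields.map((f) => [f, obj[f]]));
}

// Stand-in for a server call that returns a full, heavy trip object.
function fetchTripFromServer(id) {
  return {
    id,
    fare: 120,
    driverName: 'R. Shah',
    route: [],          // imagine a large polyline payload here
    receiptHtml: '<html>...</html>',
  };
}

// The trip-card screen only needs id, fare, and driverName, so keep just
// those, and serve repeat requests from the local cache instead of the server.
function getTripSummary(id) {
  if (localCache.has(id)) return localCache.get(id);
  const full = fetchTripFromServer(id);
  const summary = pickFields(full, ['id', 'fare', 'driverName']);
  localCache.set(id, summary);
  return summary;
}

const summary = getTripSummary('t1');
```

Trimming the payload saves network and battery on every request, and the local cache removes repeat round-trips to the server entirely.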

This was all about the new version update of Yelowsoft’s taxi solution. We’ll be coming up with many more product updates in the near future; until then, keep watching this space.


Mushahid Khatri

Mushahid Khatri is the Chief Executive Officer of Yelowsoft, one of the leading taxi dispatch and on-demand delivery solution providers. He is a visionary leader who believes in sharing his profound knowledge of business and entrepreneurship.
