ugur elveren's blog


In this article, I will discuss the Saga design pattern. First, we'll look at some of the technical challenges of achieving ACID transactions in a microservice architecture. Then, we'll explore the Saga design pattern and how to implement it.

Database Per Service Pattern

First, let's talk about microservice architecture and the database per service pattern. Basically, each microservice in an application has its own dedicated database. This allows each microservice to manage its data independently, making it easier to develop, deploy, and scale each service without affecting the others. For instance, in an online store, the user service, order service, and inventory service each have their own databases. This isolation improves flexibility, as each service can choose the best database type for its needs and makes the system more resilient to failures.

However, this pattern has some drawbacks, and data consistency and distributed transactions are two of them. Since each microservice operates independently, updates that span multiple services can leave the system in an inconsistent state, and data duplicated across services can drift out of sync. A transaction that touches several services has to be spread across them, and if one step fails, the changes already made elsewhere must be rolled back, which leads to the next problem: distributed transactions.

What is the Saga Design Pattern?

The Saga design pattern is a method for managing long-running transactions in a microservice architecture by breaking down a complex transaction into a series of smaller, independent steps, each handled by a different service. If any step fails, compensating transactions are executed to undo the changes made by previous steps, ensuring data consistency across all services. This pattern can be implemented through choreography, where each service triggers the next step via events, or orchestration, where a central coordinator directs each step. First, let's talk about choreography.
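To make the orchestration flavor a bit more concrete, here is a rough C# sketch of my own (not code from the article): an orchestrator runs each step in order and, if one fails, runs the compensating actions of the already-completed steps in reverse order. The type and method names are illustrative only.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Illustrative sketch only: each saga step pairs an action with a compensating action.
public record SagaStep(string Name, Func<Task> Execute, Func<Task> Compensate);

public class SagaOrchestrator
{
    public async Task RunAsync(IReadOnlyList<SagaStep> steps)
    {
        var completed = new Stack<SagaStep>();
        foreach (var step in steps)
        {
            try
            {
                await step.Execute();
                completed.Push(step);
            }
            catch
            {
                // A step failed: undo the completed steps in reverse order.
                while (completed.Count > 0)
                {
                    await completed.Pop().Compensate();
                }
                throw;
            }
        }
    }
}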

Read more...

At my company, we hold a weekly “Lunch and Learn” event that I really like. It lets us share our experiences and expertise. Recently, during a chat with my colleagues, I got some basic questions about dependency injection (DI). This made me think that it would be a good idea to use one of these sessions to go over DI with the team. Also, I plan to write an article about dependency injection and its best practices. In the article, I'll explain what DI is, how to use it effectively, and what the best practices are.

Read more...

The circuit breaker pattern stops a service from repeatedly calling another service after previous attempts have failed multiple times. It's similar to an electrical circuit breaker, which automatically cuts off the current when there's abnormal activity.

In a distributed environment, calls to remote resources may fail for reasons such as application exceptions, timeouts, authentication issues, or overloaded systems. These failures are often transient and resolve themselves over time, so the calling application typically handles them with a retry pattern.

However, in some cases, these failures can persist, like when a service is down or systems are consistently overloaded. Excessive retries can create a cascading effect, overloading the same resource and impacting other resources as well. Repeated calls can impact both cost and performance.

This is where the circuit breaker pattern comes into play. As the caller retries, the circuit breaker tracks the failures. If a specific type of error occurs, or the failure count passes a threshold, the circuit opens and further calls from the caller to the callee are blocked.
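As a rough illustration (my own simplified sketch, not code from the article; production systems usually rely on a library such as Polly), a circuit breaker can be modeled as a wrapper that counts consecutive failures, rejects calls while the circuit is open, and lets a trial call through again after a cool-down period:

using System;
using System.Threading.Tasks;

public class SimpleCircuitBreaker
{
    private readonly int _failureThreshold;
    private readonly TimeSpan _openDuration;
    private int _failureCount;
    private DateTime _openedAt;
    private bool _isOpen;

    public SimpleCircuitBreaker(int failureThreshold, TimeSpan openDuration)
    {
        _failureThreshold = failureThreshold;
        _openDuration = openDuration;
    }

    public async Task<T> ExecuteAsync<T>(Func<Task<T>> action)
    {
        if (_isOpen)
        {
            // While the circuit is open, reject calls until the cool-down passes.
            if (DateTime.UtcNow - _openedAt < _openDuration)
                throw new InvalidOperationException("Circuit is open; call rejected.");
            _isOpen = false; // half-open: allow one trial call through
        }

        try
        {
            T result = await action();
            _failureCount = 0; // a success resets the failure counter
            return result;
        }
        catch
        {
            _failureCount++;
            if (_failureCount >= _failureThreshold)
            {
                _isOpen = true;           // too many failures: open the circuit
                _openedAt = DateTime.UtcNow;
            }
            throw;
        }
    }
}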

Read more...

The cloud is vast. The Azure docs run to around a hundred thousand pages, and AWS is just as big. Other cloud providers are out there too. Each gives you lots of services, different rules, and dozens of integrations, so building cloud-native applications has its challenges.

They're not identical, but the big providers offer similar solutions. For example, Google Cloud Storage is like AWS S3, and Azure Durable Functions are similar to AWS Step Functions. Details and rules can differ, but the main idea of the technology is similar, along with the challenges.

Because problems are alike, solutions are too. We can group similar solutions and make templates for each group. Like a cooking recipe guides you to a tasty dish, these templates can be our guide to perfect solutions.

Read more...

Azure Functions and Azure Durable Functions

Microsoft Azure is a constantly expanding collection of cloud services. Among these services, Azure Functions and Azure Durable Functions are critical in enabling serverless computing. In this article, we will look at Azure Functions and Azure Durable Functions, distinguish between the two, and learn about the key concepts surrounding Azure Durable Functions.

Read more...

In my previous blog post, I created a script that controls the fan speed on my Raspberry Pi. However, I encountered a problem – every time I restarted my Raspberry Pi, I had to manually run the script again, which was not ideal.

In this blog post, I will demonstrate how to automate the execution of any script using Systemd. Systemd is a system and service manager for Linux operating systems.

There are several methods to run software as a background service in Linux, such as using crontab or .bashrc, among others. If you're interested in those methods, you can easily find more information online.
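To give a rough idea of what this looks like (an illustrative unit file of my own, not the one from the full post; the paths and names are placeholders), a script can be wrapped in a simple systemd service like this:

[Unit]
Description=Raspberry Pi fan control script (example)
After=multi-user.target

[Service]
Type=simple
ExecStart=/usr/bin/python3 /home/pi/fan-control.py
Restart=on-failure

[Install]
WantedBy=multi-user.target

After saving it as, for example, /etc/systemd/system/fan-control.service, the usual commands are sudo systemctl daemon-reload, sudo systemctl enable fan-control, and sudo systemctl start fan-control.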

Read more...

Last week, I did something unplanned: I bought a Raspberry Pi without any specific project in mind. I came across a Mastodon account (@rpilocator@mastodon.social) that helps people locate Raspberry Pis and I decided to get one. And now, here it is! I'm writing my first blog post about the Raspberry Pi Fan Control.

Read more...

Hello there! The prefix sum technique involves creating an array where prefix[i] is the sum of all elements up to index i. This technique is also referred to as the cumulative sum, inclusive scan, or simply scan.

prefix[0] = nums[0]
prefix[1] = nums[0] + nums[1]
prefix[2] = nums[0] + nums[1] + nums[2]
prefix[i] = nums[0] + nums[1] + nums[2] + ... + nums[i]

For example, if the original array is [1, 2, 3, 4], the prefix sum array would be [1, 3, 6, 10].

Time Complexity

The time complexity of building the prefix sum array is O(n), since we need to iterate through all the items in the input array once. But after the prefix sum array is computed, we can use it to answer subarray sum queries quickly: the sum of any subarray can be found in O(1).

If we want to find the sum of the elements from index i to index j (inclusive), the answer is prefix[j] - prefix[i] + nums[i].
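To make the formula concrete, here is a small C# sketch of my own (the method names are illustrative, not from the original post) that builds the prefix array and answers an inclusive range-sum query in O(1):

// Assumes nums is non-empty.
public static int[] BuildPrefix(int[] nums)
{
    int[] prefix = new int[nums.Length];
    prefix[0] = nums[0];
    for (int i = 1; i < nums.Length; i++)
    {
        prefix[i] = prefix[i - 1] + nums[i];
    }
    return prefix;
}

// Sum of nums[i..j] inclusive, using the formula above.
public static int RangeSum(int[] nums, int[] prefix, int i, int j)
{
    return prefix[j] - prefix[i] + nums[i];
}

For [1, 2, 3, 4], BuildPrefix returns [1, 3, 6, 10], and RangeSum(nums, prefix, 1, 3) = 10 - 3 + 2 = 9, which is indeed 2 + 3 + 4.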

Problem: Finding subarray sum with Prefix Sum

Given an array nums[] of size N and Q queries, where each query gives two indices L and R, print the sum of the array elements from index L to R.
Read more...

The sliding window technique is a strategy that aims to eliminate nested loops in problems where you need to analyze a contiguous sequence of elements, such as an array or a string. By replacing the nested loop with a single pass, it reduces the time complexity.

The sliding window technique is efficient because it avoids unnecessary computations. By moving the window only one step at a time, you avoid repeating calculations already done for the previous window. This can save a lot of time and make the algorithm more efficient.

This approach is useful in solving problems that involve finding a subarray or substring that meets a certain condition, such as the maximum sum of a subarray or the longest substring without repeating characters. By sliding the window over the input sequence, the algorithm can efficiently explore all possible subarrays or substrings and identify the ones that meet the given condition.

The longest subarray having a sum less than or equal to k

Given an array of positive integers nums and an integer k, find the
length of the longest subarray whose sum is less than or equal to k.

The problem supposes you have an array of positive numbers and a target number, k. You want to find the longest possible subarray (a contiguous sequence of elements) in the array whose sum is less than or equal to k.

Input: arr[] = { 3, 1, 2, 4, 5, 9 }, k = 10
Output: 4
Explanation: The subarray is {3, 1, 2, 4}.

Solution

Create a window of elements by moving the right pointer to the right until the desired size or condition is met. If the sum of the elements in the window exceeds the given integer k, we need to shrink the window from the left. We do this by moving the left end of the window one step to the right and subtracting the element that was previously at the left end from the running sum (current). We repeat this process as many times as needed until the sum of the elements in the window is less than or equal to k.

At each iteration, we update the answer variable with the maximum length of the subarray seen so far, calculated as right - left + 1. We continue iterating over the array until we reach the end.

public int FindLengthOfLongestSubarray(int[] nums, int k) {
    int left = 0;
    int current = 0;   // sum of the elements inside the current window
    int answer = 0;    // length of the longest valid window seen so far

    for (int right = 0; right < nums.Length; right++) {
        current += nums[right];          // expand the window to the right
        while (current > k) {
            current -= nums[left];       // shrink the window from the left
            left++;
        }
        answer = Math.Max(answer, right - left + 1);
    }
    return answer;
}

Fixed Size Sliding Window

The fixed sliding window problem is a specific type of problem that requires finding a solution within a fixed-size window of elements in an array or sequence. This means that the size of the window remains constant throughout the problem.

The maximum sum of any subarray of size k

Given an array of integers and a fixed window size of k, find the
maximum sum of any subarray of size k.

Solution

To solve this problem using the sliding window technique, we would start by initializing two pointers, left and right, to the beginning of the array. We would then create a window of elements by moving the right pointer to the right by the size of the window.

Next, we would calculate the sum of the elements in the window. We would then store this sum in a variable, say maxSum.

We would then slide the window to the right by incrementing the left and right pointers by one, subtracting the element at the left end of the old window from the sum and adding the element at the right end of the new window to the sum. We would then update maxSum as necessary by comparing it to the sum of the new window:

public int MaxSumSubarray(int[] arr, int k)
{
    int left = 0;
    int right = k - 1;
    int maxSum = 0;
    int currSum = 0;

    // Calculate the sum of the first window
    for (int i = 0; i <= right; i++)
    {
        currSum += arr[i];
    }
    maxSum = currSum;

    // Slide the window and update the maximum sum as necessary
    while (right < arr.Length - 1)
    {
        currSum -= arr[left];
        left++;
        right++;
        currSum += arr[right];

        if (currSum > maxSum)
        {
            maxSum = currSum;
        }
    }

    return maxSum;
}

Conclusion

In conclusion, the sliding window technique is a useful algorithmic pattern that can be applied to a wide range of problems in which we need to find a subarray or substring of a given array or string that satisfies certain constraints.

The technique involves creating a “window” of a fixed size or a variable size that slides through the input data, with the goal of finding the optimal solution or the longest/shortest subarray/substring that satisfies a specific condition.

Fixed-size sliding window problems are easier to solve as we only need to maintain a window of a fixed size, while variable-size sliding window problems require us to adjust the window size dynamically based on the problem constraints.

Overall, the sliding window technique provides a simple and efficient way to solve a variety of problems, particularly those that involve searching for a continuous subset of data that meets specific criteria.

#Tech

What Is The Two Pointer Technique?

The two-pointer technique is a simple approach used to solve some array-related problems. It involves using two pointers, typically one starting from the beginning of the array and the other starting from the end, to traverse the array and find a solution. This technique is helpful because it reduces the time complexity of the algorithm and increases its efficiency.

The two-pointer technique is used in various solutions such as finding the sum of two numbers in an array that equals a given target, finding the length of the longest subarray with a given sum, and finding the shortest subarray with a given sum. The basic idea behind this technique is to start from the two ends of the array and move the pointers towards each other until a solution is found or it becomes clear that a solution does not exist.

Two Sum Problem

Two Sum, one of the most famous interview questions, can be solved with the two-pointer technique.

Given an array of integers nums and an integer target, return indices of the two numbers such that they add up to target.

You may assume that each input would have exactly one solution, and you may not use the same element twice.

You can return the answer in any order.

The Solution of Two Sum Problem with Two Pointers Technique

We initialize two pointers, left and right, to the start and end of the array, respectively. We calculate the sum of the numbers at left and right and compare it with the target. If the sum is equal to the target, we have found two numbers that add up to the target, so we return their indices. If the sum is less than the target, we increment the left pointer. If the sum is greater than the target, we decrement the right pointer. If the loop ends without finding a solution, we throw an exception. Note that this approach only works when the array is sorted in ascending order; for the original (unsorted) version of the problem, you would either sort a copy of the array first or use a hash map to look up complements.

public int[] TwoSum(int[] nums, int target) 
{
    // Assumes nums is sorted in ascending order.
    int left = 0, right = nums.Length - 1;
    while (left < right) 
    {
        int sum = nums[left] + nums[right];
        if (sum == target) 
        {
            return new int[] { left, right };
        } 
        else if (sum < target) 
        {
            left++;    // sum too small: move the left pointer right
        } 
        else 
        {
            right--;   // sum too large: move the right pointer left
        }
    }
    throw new ArgumentException("No two sum solution");
}

“Two-pointer technique” and its application to problems involving two arrays

The two-pointer technique can also be applied to problems involving two arrays. In this scenario, the two pointers are used to traverse the two arrays simultaneously, with one pointer moving through each array. The goal is to find a solution that satisfies a given condition, such as finding a pair of elements with a given sum or finding the common elements between two arrays.

The Intersection Question

Let's consider an example of finding the common elements between two sorted arrays. In this problem, we are given two sorted arrays, and our goal is to find the elements that are common to both. To solve this problem using the two-pointer technique, we start with two pointers, one at the beginning of each array, and compare the elements they point to. If the elements are equal, we have found a common element, and both pointers are advanced one step. If the element in one array is smaller than the element in the other array, the pointer pointing to the smaller element is advanced one step. The process continues until one of the pointers reaches the end of its array.

The Solution of Intersection Problem with Two Pointers Technique

public List<int> Intersection(int[] nums1, int[] nums2)
{
    int i = 0, j = 0;                      // one pointer per array
    List<int> result = new List<int>();
    while (i < nums1.Length && j < nums2.Length)
    {
        if (nums1[i] == nums2[j])
        {
            result.Add(nums1[i]);          // common element found
            i++;
            j++;
        }
        else if (nums1[i] < nums2[j])
        {
            i++;                           // advance the pointer at the smaller element
        }
        else
        {
            j++;
        }
    }
    return result;
}

Note: Assumes that the input arrays are already sorted.

1. Initialize the two pointers: i and j are initialized to zero and will be used to traverse nums1 and nums2, respectively.
2. Create a result list: a list named result is created to store the elements that belong to both arrays.
3. Start a while loop: the loop runs while both i and j are less than the length of their respective arrays.
4. Compare the elements: inside the loop, the elements pointed to by i and j are compared. If they are equal, the element is added to the result list and both pointers are incremented. If the element pointed to by i is less than the element pointed to by j, only i is incremented. If the element pointed to by j is less, only j is incremented.
5. Repeat the loop: the loop continues until either i or j reaches the end of its array.
6. Return the result: finally, the result list is returned.

Conclusion

In conclusion, the two-pointer technique is a highly effective approach for solving a variety of problems in computer science. It involves using two pointers that traverse a data structure, such as an array or list, either from opposite ends toward each other or in tandem, to search for a solution. By utilizing this technique, developers can often find solutions with linear time complexity, making it an ideal method for optimizing performance and streamlining problem-solving. With its versatility and ease of use, the two-pointer technique is an indispensable tool for any C# developer, allowing them to tackle complex challenges quickly and with confidence.

#Tech