ugur elveren's blog


The cloud is vast. Azure's documentation runs to roughly a hundred thousand pages, and AWS is just as big. Other cloud providers are out there too. Each offers many services, its own rules, and dozens of integrations, so building cloud-native applications has its challenges.

They're not identical, but the big providers offer similar solutions. For example, Google Cloud Storage is like AWS S3, and Azure Durable Functions are similar to AWS Step Functions. Details and rules can differ, but the core idea of each technology is the same, along with its challenges.

Because problems are alike, solutions are too. We can group similar solutions and make templates for each group. Like a cooking recipe guides you to a tasty dish, these templates can be our guide to perfect solutions.

What is a design pattern?

In simple terms, a design pattern is a handy, repeatable solution for a common problem in making software. It's not a complete design you can directly turn into code. Instead, it's like a guide or template, describing how to solve a problem that fits various situations in software design.

What is a cloud design pattern?

In simple words, a cloud design pattern is a solution we can use over and over for common issues in cloud computing. People often use these patterns to make things scalable, reliable, and secure in the cloud.

What are the common challenges in cloud development?

Cloud development has lots of benefits. It makes scaling and adapting to market changes easy for organizations. Plus, it offers better reliability and improved disaster recovery. The pay-as-you-go model also cuts upfront costs.

Moreover, the cloud provides easy access to advanced technologies, even without in-house expertise. This allows organizations to automate IT tasks, easing the load on IT teams and giving them more time.

Cloud development has its challenges, too. Keeping systems secure, managing costs, scaling on demand, and connecting different services can all be tricky. Organizations need to tackle these challenges head-on.

Let's explore each topic to understand them better.


Scalability

Scaling is a big plus in the cloud. Organizations adjust resources as needed, paying only for what they use, which keeps costs low and solutions reliable.

However, it's not always smooth. Complex cloud setups, old systems, or not enough know-how can be challenges. These might affect reliability or lead to cost issues.

Before the cloud, there were cautionary tales. An organization would air a TV commercial in prime time, and afterward customers would see an HTTP 500 error on their screens. Not ideal!


Reliability

Cloud providers set up many data centers in different places, so even if one has a problem, others can keep things going. Also, in the cloud, there are backup components ready to jump in if needed.

However, using the cloud often means dealing with systems spread out everywhere. This makes it tricky to handle communication, consistency, and reliability among different parts. While the cloud lets you grow easily, not planning how to grow or sudden high demand can cause problems, affecting how reliable services are.

Even with backup plans, cloud services can go down sometimes. Things like hardware issues, software problems, or even attacks can happen, impacting how well applications work.


Security

Security is important in cloud development for organizations using cloud-based tools. Even though cloud providers give security features, there are still challenges that organizations must tackle to keep their cloud environments safe.

Challenges include data breaches, misconfigurations, and identity management problems. Steps like encrypting sensitive data, having strong access controls, and clear policies are vital to keep the cloud secure.

Data Management

Handling data in the cloud has lots of benefits over on-prem solutions. It lets organizations be more flexible, scalable, and cost-effective. Automated backups and recovery options make sure data stays safe. Also, cloud services often provide advanced analytics tools, helping in extracting valuable insights from the data. Being scalable helps manage big loads of data easily.

But dealing with data in the cloud comes with challenges. Organizations need to ensure data integrity and address concerns about security and compliance. Making sure rules about data are the same across different cloud setups can be hard, especially if an organization uses many clouds or a mix of cloud and on-prem systems. To keep sensitive data safe, there need to be strong security measures, like access controls, encryption, and tools to prevent data loss. Bringing data from old systems or other clouds into the cloud can also be challenging, needing careful planning and syncing.


Integration

Cloud integration brings numerous benefits to organizations, helping them link different apps, streamline data flows, and enhance overall business processes. It's like having a well-organized toolbox for your projects, making things run more smoothly, scale better, and work more efficiently.

While offering numerous benefits, cloud integration has challenges of its own. Getting old systems into the mix, especially if they're outdated, needs time and effort. This might mean custom integrations to connect them to cloud apps. Cloud integration also involves linking apps and systems that use different protocols and different rules. This can make it hard to map data, ensure compatibility, and maintain data integrity during integration. Managing data ownership, governance policies, and data lineage across multiple cloud environments can be challenging as well.


Monitoring

Monitoring cloud operations is crucial in modern cloud development. It gives organizations the info they need to keep things running well, find problems, and make sure everything in their cloud apps and systems is healthy and safe. It's like having a set of eyes always looking out for you.

But monitoring comes with challenges. Bringing in data from different sources, including cloud systems and external sources, can be complex. Monitoring systems produce a lot of data, leading to alert fatigue and difficulty in identifying critical issues. Plus, monitoring systems themselves need to be secured against unauthorized access and data breaches.

What is Next?

In the next article, I'll explain cloud design patterns one by one. I'll use real-world examples to make things clear. We'll connect each pattern with one of the challenges we talked about. By the end, this will give us more know-how on each topic and a better understanding of the challenges.

#Tech #Cloud #CloudDesignPatterns

Azure Functions and Azure Durable Functions

Microsoft Azure is a constantly expanding collection of cloud services. Among these services, Azure Functions and Azure Durable Functions are critical in enabling serverless computing. In this article, we will look at Azure Functions and Azure Durable Functions, distinguish between the two, and learn about the key concepts surrounding Azure Durable Functions.


In my previous blog post, I created a script that controls the fan speed on my Raspberry Pi. However, I encountered a problem – every time I restarted my Raspberry Pi, I had to manually run the script again, which was not ideal.

In this blog post, I will demonstrate how to automate the execution of any script using Systemd. Systemd is a system and service manager for Linux operating systems.

There are several methods to run software as a background service in Linux, such as using crontab or .bashrc, among others. If you're interested in those methods, you can easily find more information online.
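As a preview of what a Systemd service looks like, here is a minimal unit file sketch. The unit name and the script path are illustrative, not the ones from my actual setup:

```ini
# /etc/systemd/system/fancontrol.service -- minimal sketch; adjust the
# ExecStart path to point at your own script.
[Unit]
Description=Raspberry Pi fan control script
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/python3 /home/pi/fancontrol.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After saving the file, you would reload Systemd with `sudo systemctl daemon-reload` and then enable and start the service with `sudo systemctl enable --now fancontrol.service`.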


Last week, I did something unplanned: I bought a Raspberry Pi without any specific project in mind. I came across a Mastodon account that helps people locate Raspberry Pis, and I decided to get one. And now, here it is! I'm writing my first blog post about the Raspberry Pi Fan Control.


Hello there! The prefix sum technique involves creating an array where prefix[i] is the sum of all elements up to index i. This technique is also referred to as the cumulative sum, inclusive scan, or simply scan.

prefix[0] = nums[0]
prefix[1] = nums[0] + nums[1]
prefix[2] = nums[0] + nums[1] + nums[2]
prefix[i] = nums[0] + nums[1] + nums[2] + ... + nums[i]

For example, if the original array is [1, 2, 3, 4], the prefix sum array would be [1, 3, 6, 10].

Time Complexity

The time complexity of prefix sum is O(n), since we iterate through the input array exactly once. But after the prefix sum array is computed, we can use it to answer subarray sum queries quickly: the sum of any subarray can be found in O(1).

If we want the sum of the elements from index i to j (inclusive), the answer is prefix[j] - prefix[i] + nums[i].

Problem: Finding subarray sum with Prefix Sum

Given an array nums[] of size N and Q queries, where each query gives two indices L and R, print the sum of the array elements from index L to R.
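The query problem above can be sketched in C# as follows (method and variable names are illustrative; L and R follow the problem statement):

```csharp
using System;

public static class PrefixSumDemo
{
    // Build prefix[i] = nums[0] + ... + nums[i] in O(n).
    public static long[] BuildPrefix(int[] nums)
    {
        long[] prefix = new long[nums.Length];
        prefix[0] = nums[0];
        for (int i = 1; i < nums.Length; i++)
            prefix[i] = prefix[i - 1] + nums[i];
        return prefix;
    }

    // Sum of nums[L..R] inclusive in O(1): prefix[R] - prefix[L] + nums[L].
    public static long RangeSum(long[] prefix, int[] nums, int L, int R)
    {
        return prefix[R] - prefix[L] + nums[L];
    }
}
```

For nums = {1, 2, 3, 4}, the prefix array is {1, 3, 6, 10}, and the sum from index 1 to 3 is 10 - 3 + 2 = 9, which matches 2 + 3 + 4.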

The sliding window technique is a strategy that aims to eliminate nested loops in problems where you need to analyze a sequence of elements, like an array or a string. The technique replaces a nested loop with a single loop, reducing the time complexity.

The sliding window technique is efficient because it avoids unnecessary computations. By moving the window only one step at a time, you avoid repeating calculations already done for the previous window. This can save a lot of time and make the algorithm more efficient.

This approach is useful in solving problems that involve finding a subarray or substring that meets a certain condition, such as the maximum sum of a subarray or the longest substring without repeating characters. By sliding the window over the input sequence, the algorithm can efficiently explore all possible subarrays or substrings and identify the ones that meet the given condition.

The longest subarray with a sum less than or equal to k

Given an array of positive integers nums and an integer k, find the
length of the longest subarray whose sum is less than or equal to k.

The problem supposes you have an array of positive numbers and a target number, k. You want to find the longest possible subarray (a contiguous sequence of elements) in the array whose sum is less than or equal to k.

Input: arr[] = { 3, 1, 2, 4, 5, 9 }, k = 10
Output: 4
Explanation: The subarray is {3, 1, 2, 4}.


Create a window of elements by moving the right pointer to the right until the desired size or condition is met. If the sum of the elements in the window exceeds the given integer k, we need to adjust the window to the right. We do this by moving the left end of the window one step to the right and subtracting the element that was previously at the left end of the window from current. We repeat this process as many times as needed until the sum of the elements in the window is less than or equal to k.

At each iteration, we update the answer variable with the maximum length of the subarray seen so far. We calculate this as the difference between the current right index and the left index. We continue iterating over the array until we reach the end.

public int FindLengthOfLongestSubarray(int[] nums, int k) {
    int left = 0;
    int current = 0; // sum of the elements currently in the window
    int answer = 0;

    for (int right = 0; right < nums.Length; right++) {
        current += nums[right];
        // Shrink the window from the left while its sum exceeds k
        while (current > k) {
            current -= nums[left];
            left++;
        }
        answer = Math.Max(answer, right - left + 1);
    }
    return answer;
}

Fixed Size Sliding Window

The fixed sliding window problem is a specific type of problem that requires finding a solution within a fixed-size window of elements in an array or sequence. This means that the size of the window remains constant throughout the problem.

The maximum sum of any subarray of size k

Given an array of integers and a fixed window size of k, find the
maximum sum of any subarray of size k.


To solve this problem using the sliding window technique, we would start by initializing two pointers, left and right, to the beginning of the array. We would then create a window of elements by moving the right pointer to the right by the size of the window.

Next, we would calculate the sum of the elements in the window. We would then store this sum in a variable, say max_sum.

We would then slide the window to the right by incrementing the left and right pointers by one, subtracting the element at the left end of the old window from the sum and adding the element at the right end of the new window to the sum. We would then update the max_sum as necessary by comparing it to the sum of the new window:

public int MaxSumSubarray(int[] arr, int k)
{
    int left = 0;
    int right = k - 1;
    int maxSum = 0;
    int currSum = 0;

    // Calculate the sum of the first window
    for (int i = 0; i <= right; i++)
        currSum += arr[i];
    maxSum = currSum;

    // Slide the window and update the maximum sum as necessary
    while (right < arr.Length - 1)
    {
        currSum -= arr[left];
        left++;
        right++;
        currSum += arr[right];

        if (currSum > maxSum)
            maxSum = currSum;
    }

    return maxSum;
}


In conclusion, the sliding window technique is a useful algorithmic pattern that can be applied to a wide range of problems in which we need to find a subarray or substring of a given array or string that satisfies certain constraints.

The technique involves creating a “window” of a fixed size or a variable size that slides through the input data, with the goal of finding the optimal solution or the longest/shortest subarray/substring that satisfies a specific condition.

Fixed-size sliding window problems are easier to solve as we only need to maintain a window of a fixed size, while variable-size sliding window problems require us to adjust the window size dynamically based on the problem constraints.

Overall, the sliding window technique provides a simple and efficient way to solve a variety of problems, particularly those that involve searching for a continuous subset of data that meets specific criteria.


What Is The Two Pointer Technique?

The two-pointer technique is a simple approach used to solve certain array-related problems. It involves using two pointers, one starting from the beginning of the array and the other starting from the end, to traverse the array and find a solution. This technique is helpful because it reduces the time complexity of the algorithm.

The two-pointer technique is used in various solutions such as finding the sum of two numbers in an array that equals a given target, finding the length of the longest subarray with a given sum, and finding the shortest subarray with a given sum. The basic idea behind this technique is to start from the two ends of the array and move the pointers towards each other until a solution is found or it becomes clear that a solution does not exist.

Two Sum Problem

Two Sum, one of the most famous interview questions, can be solved with the two-pointer technique when the input array is sorted.

Given an array of integers nums and an integer target, return indices of the two numbers such that they add up to target.

You may assume that each input would have exactly one solution, and you may not use the same element twice.

You can return the answer in any order.

The Solution of Two Sum Problem with Two Pointers Technique

We initialize two pointers left and right to the start and end of the array, respectively. We calculate the sum of the numbers at left and right and compare it with the target. If the sum is equal to the target, we have found two numbers that add up to the target, so we return their indices. If the sum is less than the target, we increment the left pointer. If the sum is greater than the target, we decrement the right pointer. If the loop ends without finding a solution, we throw an exception.

public int[] TwoSum(int[] nums, int target)
{
    // Note: the two-pointer approach assumes nums is sorted.
    int left = 0, right = nums.Length - 1;
    while (left < right)
    {
        int sum = nums[left] + nums[right];
        if (sum == target)
            return new int[] { left, right };
        else if (sum < target)
            left++;
        else
            right--;
    }
    throw new ArgumentException("No two sum solution");
}

The two-pointer technique and its application to problems involving two arrays

The two-pointer technique can also be applied to problems involving two arrays. In this scenario, the two pointers are used to traverse the two arrays simultaneously, with one pointer moving through each array. The goal is to find a solution that satisfies a given condition, such as finding a pair of elements with a given sum or finding the common elements between two arrays.

The Intersection Question

Let's consider an example of finding the common elements between two sorted arrays. In this problem, we are given two sorted arrays, and our goal is to find the elements that are common to both arrays. To solve this problem using the two-pointer technique, we start with two pointers, one at the beginning of each array, and compare the elements pointed to by the two pointers. If the elements are equal, we have found a common element, and both pointers are advanced one step. If the element in one array is greater than the element in the other array, the pointer pointing to the greater element is advanced one step. The process continues until one of the pointers reaches the end of its array.

The Solution of Intersection Problem with Two Pointers Technique

public List<int> Intersection(int[] nums1, int[] nums2)
{
    int i = 0, j = 0;
    List<int> result = new List<int>();
    while (i < nums1.Length && j < nums2.Length)
    {
        if (nums1[i] == nums2[j])
        {
            // Common element found; advance both pointers
            result.Add(nums1[i]);
            i++;
            j++;
        }
        else if (nums1[i] < nums2[j])
            i++;
        else
            j++;
    }
    return result;
}

Note: Assumes that the input arrays are already sorted.

1. Initialize two pointers, i and j: both start at zero and are used to traverse nums1 and nums2, respectively.
2. Create a result list: a list named result stores the elements that belong to both arrays.
3. Start a while loop: the loop runs while both i and j are less than the length of their respective arrays.
4. Compare the elements: inside the loop, the elements pointed to by i and j are compared. If they are equal, the element is added to result and both pointers are incremented. If the element at i is less than the element at j, only i is incremented; if the element at j is less, only j is incremented.
5. Repeat the loop: it continues until either i or j reaches the end of its array.
6. Return the result: finally, the result list is returned.


In conclusion, the two-pointer technique is a highly effective approach for solving a variety of problems in computer science. It involves using two pointers that traverse a data structure, such as an array or list, often from opposite ends or at different speeds, to search for a solution. By utilizing this technique, developers can efficiently find solutions to problems in linear time, making it an ideal method for optimizing performance and streamlining problem-solving. With its versatility and ease of use, the two-pointer technique is an indispensable tool for any C# developer, allowing them to tackle complex challenges quickly and with confidence.