hpc_bubble

#include <iostream>
#include <omp.h>
#include <utility>   // std::swap

using namespace std;

// Sequential bubble sort: repeatedly steps through the array, swapping
// adjacent elements that are out of order.
void bubble(int array[], int n){
    for (int i = 0; i < n - 1; i++){
        for (int j = 0; j < n - i - 1; j++){
            if (array[j] > array[j + 1]) swap(array[j], array[j + 1]);
        }
    }
}

// Parallel bubble sort (odd-even transposition sort).
void pBubble(int array[], int n){
    // The worksharing (#pragma omp for) and barrier directives below only
    // take effect inside a parallel region, so the whole pass loop runs
    // inside one.
    #pragma omp parallel
    {
        for (int i = 0; i < n; ++i){
            // Odd phase: compare/swap pairs (array[j-1], array[j]) for odd j
            #pragma omp for
            for (int j = 1; j < n; j += 2){
                if (array[j] < array[j - 1]){
                    swap(array[j], array[j - 1]);
                }
            }

            // Synchronize: all threads must finish the odd phase before the
            // even phase starts. (#pragma omp for already ends in an implicit
            // barrier; this keeps the synchronization point explicit.)
            #pragma omp barrier

            // Even phase: compare/swap pairs (array[j-1], array[j]) for even j
            #pragma omp for
            for (int j = 2; j < n; j += 2){
                if (array[j] < array[j - 1]){
                    swap(array[j], array[j - 1]);
                }
            }
        }
    }
}

void printArray(int arr[], int n){
    for(int i = 0; i < n; i++) cout << arr[i] << " ";
    cout << "\n";
}

int main(){
    // Set up variables
    const int n = 500;
    int arr[n];
    double start_time, end_time;

    // Create an array with numbers from n down to 1
    for(int i = 0, j = n; i < n; i++, j--) arr[i] = j;

    // Sequential time
    start_time = omp_get_wtime();
    bubble(arr, n);
    end_time = omp_get_wtime();
    cout << "Sequential Bubble Sort took : " << end_time - start_time << " seconds.\n";
    printArray(arr, n);

    // Reset the array
    for(int i = 0, j = n; i < n; i++, j--) arr[i] = j;

    // Parallel time
    start_time = omp_get_wtime();
    pBubble(arr, n);
    end_time = omp_get_wtime();
    cout << "Parallel Bubble Sort took : " << end_time - start_time << " seconds.\n";
    printArray(arr, n);
}

/*
This code demonstrates a parallel implementation of bubble sort, known as
odd-even transposition sort, using OpenMP (Open Multi-Processing), a widely
used API for parallel programming in C/C++.

**Overview of Parallel Bubble Sort:**
- Bubble sort is a simple sorting algorithm that repeatedly steps through the list,
  compares adjacent elements, and swaps them if they are in the wrong order. This
  process is repeated until the list is sorted.
- The parallel implementation here is the odd-even (transposition) variant: each
  pass alternates between an odd phase, which compares every odd-indexed element
  with its left neighbour, and an even phase, which does the same for even-indexed
  elements. The comparisons within one phase touch disjoint pairs, so they can be
  executed in parallel. (A worked trace follows this list.)
- A barrier synchronization between the phases ensures that all threads complete
  one phase before any thread starts the next, because the even phase reads
  elements written by the odd phase.
- This parallelization exploits the independence of the compare-and-swap
  operations within each phase, allowing for concurrent execution.
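
  To make the phase structure concrete, here is a trace of a single outer-loop
  pass on a small example array:

      start:      {5, 1, 4, 2, 3}
      odd phase   (pairs (0,1) and (2,3)): {1, 5, 2, 4, 3}
      even phase  (pairs (1,2) and (3,4)): {1, 2, 5, 3, 4}

  The outer loop repeats this pair of phases n times, which is sufficient to
  guarantee a sorted array.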

**Explanation of the Code:**
1. **`bubble()` Function:** This function implements the traditional sequential
   bubble sort algorithm, which sorts an array in ascending order.

2. **`pBubble()` Function:** This function implements the parallel bubble sort
   (odd-even transposition sort), handling odd-indexed and even-indexed pairs in
   two separate phases per pass.
    - The whole body executes inside a `#pragma omp parallel` region; without an
      enclosing parallel region, the worksharing directives below would run on a
      single thread.
    - The outer loop performs `n` passes over the array.
    - The first inner loop (odd phase) steps through the odd indices j = 1, 3, 5, ...
      while j < n, and swaps `array[j]` with `array[j - 1]` whenever the current
      element is smaller than its predecessor.
    - After the odd phase, a barrier (`#pragma omp barrier`) ensures
      synchronization among threads before the even phase begins.
    - The second inner loop (even phase) steps through the even indices
      j = 2, 4, 6, ... while j < n and performs the same compare-and-swap.
    - The `#pragma omp for` directives distribute each phase's iterations among
      the available threads. (An equivalent formulation is sketched below.)
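
   A more compact, equivalent formulation (a sketch for comparison; the function
   name `pBubbleCompact` is illustrative, not part of the program above) applies
   `#pragma omp parallel for` to each phase directly:

       void pBubbleCompact(int array[], int n){
           for (int i = 0; i < n; ++i){
               #pragma omp parallel for    // odd phase
               for (int j = 1; j < n; j += 2)
                   if (array[j] < array[j - 1]) swap(array[j], array[j - 1]);

               #pragma omp parallel for    // even phase
               for (int j = 2; j < n; j += 2)
                   if (array[j] < array[j - 1]) swap(array[j], array[j - 1]);
           }
       }

   Each `parallel for` ends in an implicit barrier, so no explicit
   `#pragma omp barrier` is needed; the trade-off is opening and closing a
   parallel region 2n times instead of once.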

3. **`printArray()` Function:** This function simply prints the elements of an array.

4. **`main()` Function:** In the `main()` function:
    - An array of size `n` is initialized with the numbers from `n` down to 1,
      i.e., in reverse sorted order, the worst case for bubble sort.
    - Sequential bubble sort is performed on the array, and the elapsed time,
      measured with `omp_get_wtime()`, is printed along with the sorted array.
    - The array is reset to its original reverse order.
    - Parallel bubble sort is performed on the array, and the elapsed time is
      printed along with the sorted array.
      (Build-and-run notes follow this list.)
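
   To build and run (assuming GCC or Clang; `-fopenmp` enables OpenMP, and the
   file name `pbubble.cpp` is just a placeholder):

       g++ -O2 -fopenmp pbubble.cpp -o pbubble
       OMP_NUM_THREADS=4 ./pbubble    # OMP_NUM_THREADS selects the thread count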

Overall, this code demonstrates a basic parallelization strategy for bubble sort
using OpenMP, leveraging parallelism to potentially improve sorting performance
on multi-core systems.

The time complexity of the parallel bubble sort algorithm remains the same as
that of the sequential bubble sort algorithm in the worst case: O(n^2).

In bubble sort, whether executed sequentially or in parallel, an element can move
only by swaps with its neighbours, so in the worst case it must travel across the
entire array. In the odd-even version this takes n iterations of the outer loop,
where n is the number of elements. Each iteration performs roughly n - 1
comparisons across its two phases, so the total number of comparisons is about
n * (n - 1), which is O(n^2). (The sequential version above performs
n * (n - 1) / 2 comparisons, which is also O(n^2).) A worked count for this
program follows.
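
   For the n = 500 used in main():

       odd-even (parallel) version:  500 * 499     = 249,500 comparisons
       sequential bubble sort:       500 * 499 / 2 = 124,750 comparisons

   Both counts grow quadratically in n; only the constant factor differs.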

While parallelization can speed up the sorting process by letting the comparisons
within a phase run simultaneously, it does not change the fundamental nature of
the algorithm or its asymptotic complexity. The number of comparisons still
scales with the square of the input size; with p threads the wall-clock time is
at best on the order of n^2 / p. Additionally, parallelization introduces thread
management overhead, synchronization (the per-phase barriers), and load-balancing
considerations, which affect the actual runtime but not the theoretical time
complexity.
*/
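
// Optional correctness check (a sketch; not part of the original program):
// std::is_sorted from <algorithm> can verify each result. One might add,
// after each printArray(arr, n) call in main():
//
//     cout << (is_sorted(arr, arr + n) ? "sorted: yes\n" : "sorted: NO\n");
//
// (Requires #include <algorithm> at the top of the file.)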