#include <iostream>
#include <omp.h>

using namespace std;

// Traditional sequential bubble sort (ascending order)
void bubble(int array[], int n){
    for (int i = 0; i < n - 1; i++){
        for (int j = 0; j < n - i - 1; j++){
            if (array[j] > array[j + 1]) swap(array[j], array[j + 1]);
        }
    }
}
// Parallel odd-even transposition sort. The `#pragma omp for` work-sharing
// directives only distribute iterations when they are enclosed in a parallel
// region, so the whole sweep runs inside `#pragma omp parallel`.
void pBubble(int array[], int n){
    #pragma omp parallel
    {
        for (int i = 0; i < n; ++i){
            // Odd phase: compare-exchange the pairs (j-1, j) for odd j
            #pragma omp for
            for (int j = 1; j < n; j += 2){
                if (array[j] < array[j - 1]){
                    swap(array[j], array[j - 1]);
                }
            }
            // Synchronize before the even phase (the implicit barrier at the
            // end of the `omp for` construct already guarantees this)
            #pragma omp barrier
            // Even phase: compare-exchange the pairs (j-1, j) for even j
            #pragma omp for
            for (int j = 2; j < n; j += 2){
                if (array[j] < array[j - 1]){
                    swap(array[j], array[j - 1]);
                }
            }
        }
    }
}
void printArray(int arr[], int n){
    for (int i = 0; i < n; i++) cout << arr[i] << " ";
    cout << "\n";
}
int main(){
    // Set up variables
    const int n = 500;
    int arr[n];
    double start_time, end_time;

    // Fill the array with numbers from n down to 1
    for (int i = 0, j = n; i < n; i++, j--) arr[i] = j;

    // Sequential time
    start_time = omp_get_wtime();
    bubble(arr, n);
    end_time = omp_get_wtime();
    cout << "Sequential Bubble Sort took: " << end_time - start_time << " seconds.\n";
    printArray(arr, n);

    // Reset the array
    for (int i = 0, j = n; i < n; i++, j--) arr[i] = j;

    // Parallel time
    start_time = omp_get_wtime();
    pBubble(arr, n);
    end_time = omp_get_wtime();
    cout << "Parallel Bubble Sort took: " << end_time - start_time << " seconds.\n";
    printArray(arr, n);
}
/*
This code demonstrates a parallel implementation of bubble sort using OpenMP (Open Multi-Processing),
a widely used API for parallel programming in C/C++.
**Overview of Parallel Bubble Sort:**
- Bubble sort is a simple sorting algorithm that repeatedly steps through the list, compares
adjacent elements, and swaps them if they are in the wrong order. This process repeats until the list is sorted.
- The parallel implementation here is an odd-even transposition sort: each sweep alternates
between two phases. The odd phase compare-exchanges the pairs (0,1), (2,3), ...; the even
phase compare-exchanges the pairs (1,2), (3,4), ....
- Within a single phase, no two pairs share an element, so all of its compare-exchanges are
independent and can execute concurrently across threads.
- A barrier synchronization between the phases ensures that every thread finishes one phase
before any thread starts the next, so the phases never overlap.
- After n rounds of the two phases, the array is guaranteed to be sorted.
**Explanation of the Code:**
1. **`bubble()` Function:** This function implements the traditional sequential bubble sort algorithm,
which sorts an array in ascending order.
2. **`pBubble()` Function:** This function implements the parallel odd-even sort. It alternates
between compare-exchanging pairs that end at an odd index and pairs that end at an even index.
- The outer loop performs n sweeps over the array.
- The first inner loop (odd phase) visits the odd indices j = 1, 3, 5, ... and swaps
`array[j]` with `array[j-1]` whenever `array[j]` is the smaller of the two.
- After the odd phase, a barrier (`#pragma omp barrier`) synchronizes the threads before the
even phase begins (the `omp for` construct also ends with an implicit barrier, so the explicit
one is for clarity).
- The second inner loop (even phase) visits the even indices j = 2, 4, 6, ... and performs
the same compare-exchange.
- The `#pragma omp for` directives distribute loop iterations among the threads of the
enclosing `#pragma omp parallel` region; without that region they would execute sequentially.
3. **`printArray()` Function:** This function simply prints the elements of an array.
4. **`main()` Function:** In the `main()` function:
- An array of size n is initialized with numbers from n down to 1 (reverse order).
- Sequential bubble sort is performed on the array, and the time taken is measured
and printed along with the sorted array.
- The array is reset to its original order.
- Parallel bubble sort is performed on the array, and the time taken is measured and
printed along with the sorted array.
Overall, this code demonstrates a basic parallelization strategy for bubble sort using OpenMP,
leveraging parallelism to potentially improve sorting performance on multi-core systems.
The time complexity of the parallel bubble sort algorithm remains the same as that of the
sequential bubble sort algorithm in the worst case: O(n^2).
In bubble sort, whether it is executed sequentially or in parallel, each element must be
compared against its neighbors enough times to reach its final position. The sequential
version performs n-1 passes, and pass i makes n-1-i comparisons, for a total of
n(n-1)/2 comparisons, which is O(n^2). The parallel version performs n sweeps of two
phases, each phase making roughly n/2 comparisons, so its total work is likewise O(n^2).
While parallelization can speed up the sorting process by allowing multiple comparisons to occur
simultaneously, it does not change the fundamental nature of the algorithm or its time complexity.
The number of comparisons still scales with the square of the input size. Additionally,
parallelization introduces overhead for thread management, synchronization, and load balancing,
which may affect the actual runtime but does not change the theoretical time complexity.
*/