Ex. No: 1
STUDY OF ALL COMPUTING TECHNOLOGIES
Date:

Aim:
To study the different computing technologies, such as distributed computing, ubiquitous computing, cluster computing and mobile computing.

Client/Server Computing:
Client/server computing is a computing model in which client and server computers communicate with each other over a network. A server takes requests from client computers and shares its resources, applications and/or data with one or more clients on the network; a client is a computing device that initiates contact with a server in order to make use of a shareable resource.
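
To make the request/response pattern concrete, here is a minimal C sketch of the client side (a hedged illustration: the address 127.0.0.1, the port 8080 and the one-line protocol are assumptions, with some server already listening there). The client initiates contact, sends a request and reads the server's reply.

/* Minimal TCP client: initiates contact with a server, sends a request,
   and prints the reply. Address 127.0.0.1 and port 8080 are illustrative. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);        /* client endpoint */
    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port = htons(8080);
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    if (connect(sock, (struct sockaddr *)&server, sizeof(server)) < 0) {
        perror("connect");                              /* no server listening */
        return 1;
    }
    const char *request = "hello\n";
    write(sock, request, strlen(request));              /* client makes a request */

    char reply[256];
    ssize_t n = read(sock, reply, sizeof(reply) - 1);   /* server shares its data */
    if (n > 0) {
        reply[n] = '\0';
        printf("server replied: %s", reply);
    }
    close(sock);
    return 0;
}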

Peer-to-Peer Computing:
Peer-to-peer (P2P) is a decentralized communications model in which each party has the same capabilities and either party can initiate a communication session. Unlike the client/server model, in which the client makes a service request and the server fulfills it, the P2P model allows each node to function as both a client and a server.

Centralized Computing:
Centralized computing is computing done at a central location, using terminals that are attached to a central computer. The computer itself may control all the peripherals directly (if they are physically connected to it), or they may be attached via a terminal server. Alternatively, if the terminals have the capability, they may connect to the central computer over the network. The terminals may be text terminals or thin clients.

Parallel Computing:
Parallel computing is a type of computing architecture in which several processors execute or process an application or computation simultaneously. It helps in performing large computations by dividing the workload between more than one processor, all of which work through the computation at the same time. Most supercomputers employ parallel computing principles to operate. Parallel computing is also known as parallel processing.
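
As a small illustration of dividing a workload between processors, the following hedged C sketch (the array size and thread count are illustrative choices) sums an array with two POSIX threads working simultaneously and then combines their partial results. Compile with gcc -pthread.

/* Parallel sum with POSIX threads: the workload (an array) is divided
   between two threads that run at the same time. */
#include <stdio.h>
#include <pthread.h>

#define N 1000000
#define NTHREADS 2

static long data[N];

struct slice { int start, end; long sum; };

static void *partial_sum(void *arg)
{
    struct slice *s = arg;
    s->sum = 0;
    for (int i = s->start; i < s->end; i++)
        s->sum += data[i];                  /* each thread works on its own part */
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = 1;                        /* expected total: N */

    pthread_t tid[NTHREADS];
    struct slice part[NTHREADS];
    for (int t = 0; t < NTHREADS; t++) {
        part[t].start = t * (N / NTHREADS);
        part[t].end = (t + 1) * (N / NTHREADS);
        pthread_create(&tid[t], NULL, partial_sum, &part[t]);
    }
    long total = 0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);         /* wait for each worker */
        total += part[t].sum;               /* combine partial results */
    }
    printf("total = %ld\n", total);
    return 0;
}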

Distributed Computing:
Distributed computing is a computing concept that, in its most general sense, refers to multiple computer systems working on a single problem. The problem is divided into many parts, and each part is solved by a different computer. As long as the computers are networked, they can communicate with each other to solve the problem; if done properly, they perform like a single entity.

Grid Computing:
Grid computing is a processor architecture that combines computer resources from various domains to reach a main objective. In grid computing, the computers on the network can work on a task together, thus functioning as a supercomputer.

Utility Computing:
Utility computing is the process of providing computing service through an on-demand, pay-per-use billing method. It is a computing business model in which the provider owns, operates and manages the computing infrastructure and resources, and subscribers access it as and when required on a rental or metered basis.

Pervasive (or Ubiquitous) Computing:
Pervasive computing is an emerging trend associated with embedding microprocessors in day-to-day objects, allowing them to communicate information. It is also known as ubiquitous computing. The terms ubiquitous and pervasive signify "existing everywhere"; pervasive computing systems are totally connected and consistently available.

Cluster Computing:
A cluster is a set of closely connected computers that work together so that, in many respects, they can be viewed as a single system. Cluster computing supports High Performance Distributed Computing (HPDC), in which parallel and/or distributed computing techniques are applied to the solution of computationally intensive applications across networks of computers.

Fog Computing:
Fog computing or fog networking, also known as fogging, is an architecture that uses edge devices to carry out a substantial amount of computation, storage and communication locally, routed over the internet backbone, and most definitively has input and output from the physical world, known as transduction. Fog computing consists of edge nodes directly performing physical input and output, often to achieve sensor input, display output, or full closed-loop process control; it may also use smaller edge clouds, often called cloudlets, at or near the edge rather than centralized clouds residing in very large data centres. The processing power in advanced edge clouds, such as those that control autonomous vehicles, can be considerable compared to more traditional edge personal devices such as mobile phones and personal computers.

Edge Computing:
"Edge computing" is used as a kind of catch-all for various networking technologies, including peer-to-peer networking and ad hoc networking, as well as various types of cloud setups and other distributed systems. One other predominant type of edge networking is mobile edge computing, an architecture that utilizes the edge of the cellular network for operations.

Cloud Computing:
Cloud computing is a general term for the delivery of hosted services over the internet. It enables companies to consume a compute resource, such as a virtual machine, storage or an application, as a utility -- just like electricity -- rather than having to build and maintain computing infrastructure in house.

Ambient Computing:
Ambient computing is the backdrop of sensors, devices, intelligence and agents that can put the Internet of Things to work.

Internet of Things (IoT):
The Internet of Things (IoT) is the network of physical devices, vehicles, home appliances and other items embedded with electronics, software, sensors, actuators and connectivity, which enables these things to connect and exchange data, creating opportunities for more direct integration of the physical world into computer-based systems and resulting in efficiency improvements, economic benefits and reduced human exertion. The number of IoT devices increased 31% year-over-year to 8.4 billion in 2017, and it is estimated that there will be 30 billion devices by 2020. The global market value of IoT is projected to reach $7.1 trillion by 2020.
IoT involves extending internet connectivity beyond standard devices, such as desktops, laptops, smartphones and tablets, to any range of traditionally dumb or non-internet-enabled physical devices and everyday objects. Embedded with technology, these devices can communicate and interact over the internet, and they can be remotely monitored and controlled.

Data Analytics:
Data analysis is a process of inspecting, cleansing, transforming and modeling data with the goal of discovering useful information, informing conclusions and supporting decision-making. Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names, and is used in different business, science and social science domains.

Data Mining:
Data mining is the process of discovering patterns in large data sets, involving methods at the intersection of machine learning, statistics and database systems.

Data Warehouse:
In computing, a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis, and is considered a core component of business intelligence. DWs are central repositories of integrated data from one or more disparate sources. They store current and historical data in one single place and are used for creating analytical reports for workers throughout the enterprise.

Web Services and Service Architecture:
A service-oriented architecture is essentially a collection of services. These services communicate with each other; the communication can involve either simple data passing or two or more services coordinating some activity. Some means of connecting services to each other is needed.
Service-oriented architectures are not a new thing. The first service-oriented architecture for many people in the past was the use of DCOM or Object Request Brokers (ORBs) based on the CORBA specification. For more on DCOM and CORBA, see Prior Service-Oriented Architecture Specifications.

Result:
Thus the different computing technologies were studied successfully.

Ex. No: 2
STUDY OF GRID & CLOUD COMPUTING
Date:

Aim:
To study about grid and cloud computing.

Grid Computing:
Grid computing is the collection of computer resources from multiple locations to reach a common goal. The grid can be thought of as a distributed system with non-interactive workloads that involve a large number of files. Grid computing is distinguished from conventional high-performance computing systems, such as cluster computing, in that grid computers have each node set to perform a different task/application.
Grid computers also tend to be more heterogeneous and geographically dispersed (thus not physically coupled) than cluster computers. Although a single grid can be dedicated to a particular application, commonly a grid is used for a variety of purposes. Grids are often constructed with general-purpose grid middleware software libraries.

Grid Architecture Layers:
Grid computing is a distributed architecture of large numbers of computers connected to solve a complex problem. In the grid computing model, servers or personal computers run independent tasks and are loosely linked by the Internet or low-speed networks. Computers may connect directly or via scheduling systems.

OGSA & OGSI Architecture:

Cloud Computing:
Cloud computing is an information technology (IT) paradigm that enables ubiquitous access to shared pools of configurable system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a public utility.

Types of Cloud Computing:
Based on location, a cloud can be classified as:
• public
• private
• hybrid
• community cloud

Public Cloud:
The public cloud is defined as computing services offered by third-party providers over the public Internet, making them available to anyone who wants to use or purchase them. They may be free or sold on demand, allowing customers to pay only per usage for the CPU cycles, storage or bandwidth they consume.

Private Cloud:
A private cloud is a cloud computing hardware and software platform that is dedicated to a single organization. A hosted private cloud gives the organization the freedom to choose its network routers and switches, firewalls, server hardware, storage systems and cloud computing software, and the power to construct and manage clouds across its internal data centers and the provider's data centre on terms it controls. This means the organization can keep a handle on compliance, security and costs, and let business needs drive its IT strategy instead of having IT limit its options.

Hybrid Cloud:
Hybrid cloud is a cloud computing environment that uses a mix of on-premises private cloud and third-party public cloud services, with orchestration between the two platforms. By allowing workloads to move between private and public clouds as computing needs and costs change, hybrid cloud gives businesses greater flexibility and more data deployment options.

Community Cloud:
A community cloud is a cloud service model that provides a cloud computing solution to a limited number of individuals or organizations and that is governed, managed and secured commonly by all the participating organizations or by a third-party managed service provider.

Based on the service that the cloud is offering, it can be classified as:
• IaaS (Infrastructure-as-a-Service)
• PaaS (Platform-as-a-Service)
• SaaS (Software-as-a-Service)
• or Storage, Database, Information, Process, Application, Integration, Security, Management, or Testing-as-a-Service

Infrastructure-as-a-Service:
Infrastructure as a service (IaaS) refers to online services that provide high-level APIs used to abstract various low-level details of the underlying network infrastructure, such as physical computing resources, location, data partitioning, scaling, security and backup. A hypervisor, such as Xen, Oracle VirtualBox, Oracle VM, KVM, VMware ESX/ESXi, Hyper-V or LXD, runs the virtual machines as guests.

Platform-as-a-Service:
Platform as a service (PaaS), or application platform as a service (aPaaS), is a category of cloud computing services that provides a platform allowing customers to develop, run and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an app. PaaS can be delivered in three ways:
• As a public cloud service from a provider, where the consumer controls software deployment with minimal configuration options, and the provider provides the networks, servers, storage, operating system (OS), middleware (e.g. Java runtime, .NET runtime, integration), database and other services to host the consumer's application.
• As a private service (software or appliance) behind a firewall.
• As software deployed on a public infrastructure as a service.

Software-as-a-Service:
Software as a service (SaaS) is a software licensing and delivery model in which software is licensed on a subscription basis and is centrally hosted. It is sometimes referred to as "on-demand software", and was formerly referred to as "software plus services" by Microsoft. SaaS is typically accessed by users through a thin client via a web browser. SaaS has become a common delivery model for many business applications, including office software, messaging software, payroll processing software, DBMS software, management software, CAD software, development software, gamification, virtualization, accounting, collaboration and customer relationship management (CRM).

Merits:
• Flexibility
• Efficiency
• Strategic value
• Cost savings
• Manageability

Demerits:
• Downtime
• Security
• Vendor lock-in
• Limited control

Result:
Thus grid and cloud computing were studied successfully.
Ex. No: 3
EXECUTION OF LINUX COMMANDS IN UBUNTU
Date:

AIM:
To study Linux commands and execute them in Ubuntu.

LINUX COMMANDS:
Linux is one of the popular versions of the UNIX operating system. It is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind, and its functionality list is quite similar to that of UNIX.

Components of the Linux System:
The Linux operating system has primarily three components:
• Kernel − The kernel is the core part of Linux. It is responsible for all major activities of the operating system. It consists of various modules and interacts directly with the underlying hardware. The kernel provides the required abstraction to hide low-level hardware details from system or application programs.
• System Library − System libraries are special functions or programs through which application programs or system utilities access the kernel's features. These libraries implement most of the functionality of the operating system and do not require the kernel module's code access rights.
• System Utility − System utility programs are responsible for specialized, individual-level tasks.

Kernel Mode vs User Mode:
Kernel component code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space and does not require any context switch, and hence is very efficient and fast. The kernel runs each process and provides system services to processes, including protected access to hardware.
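
As a minimal illustration of the boundary between the two modes, the following C sketch (illustrative, using only the standard POSIX write() call) runs a loop entirely in user mode and then makes a system call, which is the point where the CPU switches into kernel mode so the kernel can perform the privileged I/O.

/* User mode vs kernel mode: the loop runs entirely in user mode;
   write() traps into the kernel, which performs the I/O on our behalf. */
#include <unistd.h>

int main(void)
{
    long sum = 0;
    for (int i = 0; i < 1000; i++)    /* pure computation: user mode only */
        sum += i;

    /* A system call: control transfers to kernel mode and back. */
    write(STDOUT_FILENO, "hello from user mode\n", 21);
    (void)sum;                        /* keep the loop result in scope */
    return 0;
}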

Basic Commands:
1. date Command:
This command is used to display the current date and time.
Syntax:
$date

2. Calendar Command:
This command is used to display the calendar of a year or of a particular month of a year.
Syntax:
a. $cal <year>
b. $cal <month> <year>
The first form gives the entire calendar for the given year; the second gives the calendar of the requested month of that year.

3. echo Command:
This command is used to print its arguments on the screen.
Syntax:
$echo <text>

4. banner Command:
It is used to display text drawn with the '#' symbol, in the form of a banner.
Syntax:
$banner <arguments>

5. who Command:
It is used to display the users currently connected to our computer.
Syntax:
$who

6. who am i Command:
Displays the details of the current user (login name, terminal and login time).
Syntax:
$who am i

7. tty Command:
It displays the terminal name.
Syntax:
$tty

8. clear Command:
It is used to clear the screen.
Syntax:
$clear

9. man Command:
It helps us learn about a particular command, its options and its working. It is like the 'help' command in Windows.
Syntax:
$man <command name>

10. ls (List) Command:
It is used to list the contents of the current working directory.
Syntax:
$ls -options <arguments>
If the command does not contain any argument, it works on the current directory.
Options:
a – list all the files, including hidden files.
C – list the files column-wise.
d – list the directories.
m – list the files separated by commas.
p – list files with '/' appended to all directories.
r – list the files in reverse alphabetical order.
t – list the files based on the last modification date.
x – list in column-wise sorted order, across rows.

DIRECTORY-RELATED COMMANDS:
1. pwd (Present Working Directory) Command:
Prints the complete path of the current working directory.
Syntax:
$pwd

2. mkdir Command:
Creates a new directory in the current directory.
Syntax:
$mkdir <directory name>

3. cd Command:
Changes (moves) to the mentioned directory.
Syntax: $cd <directory name>

4. rmdir Command:
Removes a directory inside the current directory (not the current directory itself).
Syntax:
$rmdir <directory name>

FILE-RELATED COMMANDS:
1. CREATE A FILE:
To create a new file in the current directory we use the cat command.
Syntax:
$cat > filename

2. DISPLAY A FILE:
To display the content of a file we use the cat command without the '>' operator.
Syntax:
$cat filename

3. APPENDING CONTENTS:
To append the content of one file to another. If the destination file does not exist, a new file is created; the '>>' operator appends to existing data, whereas '>' would overwrite it.
Syntax:
$cat <source filename> >> <destination filename>

4. SORTING A FILE:
To sort the contents of a file in alphabetical order, or in reverse order with the -r option.
Syntax:
$sort <filename>
Option: $sort -r <filename>

5. COPYING CONTENTS FROM ONE FILE TO ANOTHER:
To copy the contents from a source to a destination file so that both contents are the same.
Syntax: $cp <source filename> <destination filename>
$cp <source filename path> <destination filename path>

6. mv (Move) Command:
To completely move the contents from the source file to the destination file and remove the source file.
Syntax:
$mv <source filename> <destination filename>

7. rm (Remove) Command:
To permanently remove a file.
Syntax:
$rm <filename>

8. wc (Word Count) Command:
To count the number of lines, words and characters in a file.
Syntax:
$wc <filename>
Options:
-c – display the number of characters.
-l – display only the number of lines.
-w – display the number of words.

9. lp (Line Printer) Command:
To print a file through the printer, we use the lp command.
Syntax: $lp <filename>

10. pg (Page) Command:
This command is used to display the contents of a file page-wise; the next page can be viewed by pressing the enter key.
Syntax:
$pg <filename>

11. FILTERS AND PIPES:
HEAD: It is used to display the first ten lines of a file.
Syntax: $head <filename>

TAIL: This command is used to display the last ten lines of a file.
Syntax: $tail <filename>

PAGE: This command shows a file page by page: a screenful of information is displayed, after which the command shows a prompt and pauses for the user to strike the enter key to continue scrolling.
Syntax: $ls -a | pg

MORE: It also displays a file page by page. To continue scrolling with the more command, press the space bar.
Syntax: $more <filename>

GREP: This command is used to search for and print the specified patterns from a file.
Syntax: $grep [option] pattern <filename>

SORT: This command is used to sort data in some order.
Syntax: $sort <filename>

PIPE: It is a mechanism by which the output of one command can be channeled into the input of another command (see the C sketch after this list).
Syntax: $who | wc -l

TR: The tr filter is used to translate one set of characters from the standard input into another.
Syntax: $tr "[a-z]" "[A-Z]"
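
The pipe mechanism can also be used directly from C. This hedged sketch reproduces the $who | wc -l pipeline above with the pipe(), fork() and exec() system calls: the write end of the pipe becomes who's standard output, and the read end becomes wc's standard input.

/* Reimplements the shell pipeline `who | wc -l` with POSIX primitives. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) < 0) { perror("pipe"); return 1; }

    if (fork() == 0) {                  /* first child runs who */
        dup2(fd[1], STDOUT_FILENO);     /* stdout -> write end of the pipe */
        close(fd[0]); close(fd[1]);
        execlp("who", "who", (char *)NULL);
        perror("execlp who"); _exit(1);
    }
    if (fork() == 0) {                  /* second child runs wc -l */
        dup2(fd[0], STDIN_FILENO);      /* stdin <- read end of the pipe */
        close(fd[0]); close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        perror("execlp wc"); _exit(1);
    }
    close(fd[0]); close(fd[1]);         /* parent must close both ends */
    while (wait(NULL) > 0)              /* wait for both children */
        ;
    return 0;
}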

COMMUNICATION THROUGH UNIX COMMANDS:
Command: mesg
Description: The mesg command is used to give other users permission to send messages to your terminal.
Syntax: $mesg y

Command: write
Description: This command is used to communicate with other users who are logged in at the same time.
Syntax: $write <user name>

Command: wall
Description: This command sends a message to all users who are logged in to the UNIX server.
Syntax: $wall <message>

Command: mail
Description: It refers to textual information that can be transferred from one user to another.
Syntax: $mail <user name>

Command: reply
Description: It is used to send a reply to a specified user.
Syntax: $reply <user name>

vi EDITOR COMMANDS:
The vi editor is a visual editor used to create and edit text files, documents and programs. It displays the content of files on the screen and allows a user to add, delete or change parts of the text. There are three modes available in the vi editor:
1. Command mode
2. Input (or insert) mode
3. Ex (last-line) mode
The vi editor is invoked by giving the following command at the UNIX prompt.
Syntax: $vi <filename> (or) $vi
Options:
1. vi +n <filename> – positions the cursor at the nth line.
2. vi -R <filename> – opens the file read-only.
To change from one mode to another, press the escape key.

INSERTING AND REPLACING COMMANDS:
To move the editor from command mode to edit mode, press the <ESC> key followed by the listed key. For inserting and replacing, the following commands are used.
1. ESC a Command:
This command is used to enter edit mode and start appending after the current character.
Syntax: <ESC> a

2. ESC A Command:
This command is also used to append to the file, but it appends at the end of the current line.
Syntax: <ESC> A

3. ESC i Command:
This command is used to insert text before the current cursor position.
Syntax: <ESC> i

4. ESC I Command:
This command is used to insert at the beginning of the current line.
Syntax: <ESC> I

5. ESC o Command:
This command inserts a blank line below the current line and allows insertion of content.
Syntax: <ESC> o

6. ESC O Command:
This command inserts a blank line above the current line and allows insertion of content.
Syntax: <ESC> O

7. ESC r Command:
This command replaces a particular character with a given character.
Syntax: <ESC> rx, where x is the new character.

8. ESC R Command:
This command is used to replace particular text with given text.
Syntax: <ESC> R text

9. ESC s Command:
This command replaces a single character with a group of characters.
Syntax: <ESC> s

10. ESC S Command:
This command is used to replace the current line with a group of characters.
Syntax: <ESC> S

RESULT:
Thus the Linux commands were executed successfully in the Ubuntu environment.

Ex. No: 4
INSTALLATION OF C COMPILER IN THE VIRTUAL MACHINE
Date:

Aim:
To install a C compiler in the virtual machine and execute some sample programs in a cloud environment.

PROCEDURE:
STEP 1: Open Oracle VM VirtualBox.
STEP 2: Click File [Oracle VirtualBox Manager].
STEP 3: Click Import Appliance; in the "Appliance to Import" dialog, click Browse (go to D: or E:).
STEP 4: In D: or E:, open the Hadoop & OpenNebula folder.
STEP 5: In that folder, open ubuntu-grid.
STEP 6: Click Next.
STEP 7: Click Import.
STEP 8: The virtual disk image is imported (it will take time to import).
STEP 9: Click ubuntu, then click Start.
STEP 10: Click OK.
STEP 11: At the login screen (File Machine View Input Devices Help), log in with:
User name: Dinesh
Password: 99425
STEP 12: Click the search box and type "terminal", or directly click Terminal.
STEP 13: To open a new file, syntax: $vi filename.c
STEP 14: Go to insert mode by pressing esc+i.
STEP 15: After typing the C program, press esc and :wq (to save and quit).
STEP 16: cc filename.c (to compile)
STEP 17: ./a.out (to run)

Program 1: Switch case
#include<stdio.h>
void main()
{
int a,b,ch;
int c;
printf("Enter two numbers\n");
scanf("%d%d",&a,&b);
do
{
printf("\nEnter your choice\n");
scanf("%d",&ch);
switch(ch)   /* 1: add, 2: subtract, 3: multiply, 4: divide */
{
case 1:
printf("Addition is %d",a+b);
break;
case 2:
printf("Subtraction is %d",a-b);
break;
case 3:
printf("Multiplication is %d",a*b);
break;
case 4:
printf("Division is %d",a/b);
break;
default:
printf("Enter valid choice");
break;
}
printf("Do you want to continue 0/1");   /* 0 repeats the menu, 1 exits */
scanf("%d",&c);
}while(c==0);
}
Output:
Enter two numbers
3
2
Enter your choice
1
Addition is 5
Do you want to continue 0/1
0
Enter your choice
5
Enter valid choice
Do you want to continue 0/1
1

Program 2: Armstrong number
#include<stdio.h>
void main()
{
int n,a=0,r=0,rf=0;
printf("Enter the number\n");
scanf("%d",&n);
a=n;
while(n!=0)   /* sum of cubes of the digits (valid for 3-digit numbers) */
{
r=n%10;
rf+=r*r*r;
n=n/10;
}
if(rf==a)
printf("Entered number is an Armstrong number");
else
printf("Entered number is not an Armstrong number");
}
Output:
Enter the number
153
Entered number is an Armstrong number

Program 3: Factorial
#include<stdio.h>
void main()
{
int n,f=1,i;
printf("Enter the number\n");
scanf("%d",&n);
for(i=1;i<=n;i++)   /* f accumulates 1*2*...*n */
f=f*i;
printf("Factorial of %d is %d",n,f);
}
Output:
Enter the number: 4
Factorial of 4 is 24

Program 4: Addition of two numbers
#include<stdio.h>
void main()
{
int a,b,c;
printf("Enter two numbers\n");
scanf("%d%d",&a,&b);
c=a+b;
printf("Sum of a and b is %d",c);
}
Output:
Enter two numbers
23
27
Sum of a and b is 50

Program 5: Palindrome
#include<stdio.h>
void main()
{
int n,a=0,r=0,rf=0;
printf("Enter the number\n");
scanf("%d",&n);
a=n;
while(n!=0)   /* reverse the digits of n into rf */
{
r=n%10;
rf=rf*10+r;
n=n/10;
}
if(rf==a)
printf("Entered number is a palindrome");
else
printf("Entered number is not a palindrome");
}
Output:
Enter the number
33
Entered number is a palindrome

Program 6: Multiplication table
#include<stdio.h>
void main()
{
int m,n,i;
printf("Enter which table you want\n");
scanf("%d",&m);
printf("Enter the range of the table\n");
scanf("%d",&n);
for(i=1;i<=n;i++)   /* print i x m for i = 1..n */
printf("%d x %d = %d \n",i,m,i*m);
}
Output:
Enter which table you want
2
Enter the range of the table
3
1 x 2 = 2
2 x 2 = 4
3 x 2 = 6

Result:
Thus the C compiler was installed in the virtual machine and sample programs were executed and verified in the cloud environment.

Ex. No: 5
RUNNING VMs OF DIFFERENT CONFIGURATIONS IN OPENNEBULA
Date:

AIM:
To find the procedure to run virtual machines of different configurations and to check how many virtual machines can be utilized at a particular time.

PROCEDURE:
Install the OpenNebula sandbox:
1. Open VirtualBox.
2. Choose File -> Import Appliance.
3. Browse to the OpenNebula-Sandbox-5.0.ova file.
4. Then go to Settings, select USB and choose USB 1.1.
5. Then start the OpenNebula VM.
6. Log in using username: root, password: opennebula.

Procedure to run virtual machines of different configurations and to check how many virtual machines can be utilized at a particular time:
1. Open a browser and type localhost:9869.
2. Log in using username: oneadmin, password: opennebula.
3. Click on Instances, select VMs, then follow these steps to create a virtual machine:
a. Expand the + symbol.
b. Select the user oneadmin.
c. Then enter the VM name, number of instances and CPU.
d. Then click on the Create button.
e. Repeat steps c and d to create more than one VM.

RESULT:
Thus the procedure to run virtual machines of different configurations and to check how many virtual machines can be utilized at a particular time using OpenNebula was executed and verified.

Ex. No: 6
PROCEDURE TO ATTACH A VIRTUAL BLOCK TO THE VM
Date:

AIM:
To find the procedure to attach a virtual block to the virtual machine and to check whether it holds the data even after the release of the virtual machine.

PROCEDURE:
Method 1:
1. Open VirtualBox.
2. Power off the VM to which you want to add the virtual block.
3. Then right-click on that VM and select Settings.
4. Then click on Storage and find Controller: IDE.
5. At the top right, find the Add Hard Disk icon; a pop-up window is displayed.
6. In that window select Create New Disk, then click Next, Next and Finish.
7. Then find the Attributes section; the hard disk appears as IDE Secondary Slave.

Method 2:
1. Open a browser and type localhost:9869.
2. Log in using username: oneadmin, password: opennebula.
3. Click on Instances, select VMs, then follow these steps to add a virtual block:
a. Select any one VM from the list and power it off.
b. Then click on that VM, find the Storage tab and click on it.
c. Then find the Attach Disk button.
d. Click on that button; a new pop-up window is displayed.
e. In that window select either an image or a volatile disk.
f. Click on the Attach button.

RESULT:
Thus the procedure to attach a virtual block to the virtual machine and to check whether it holds the data even after the release of the virtual machine was executed and verified.

Ex. No: 7
VIRTUAL MACHINE MIGRATION
Date:

AIM:
To learn the procedure to migrate a virtual machine from one host to another, perform the migration, and show virtual machine migration based on a certain condition from one node to the other.

PROCEDURE:
1. Open a browser and type localhost:9869.
2. Log in using username: oneadmin, password: opennebula.
3. Then follow these steps to migrate VMs:
a. Click on Infrastructure.
b. Select Clusters and enter the cluster name.
c. Then select the Hosts tab and select all hosts.
d. Then select the VNets tab and select all vnets.
e. Then select the Datastores tab and select all datastores.
f. Then choose Hosts under the Infrastructure tab.
g. Click on the + symbol to add a new host, name the host, then click on Create.
4. Click on Instances and select the VMs to migrate, then follow these steps:
a. Click on the 8th icon; a drop-down list is displayed.
b. Select Migrate from it; a pop-up window is displayed.
c. In that window select the target host to migrate to, then click on Migrate.

Before migration:
Host: naveenkumar
Host: one-sandbox

After migration:
Host: one-sandbox
Host: naveenkumar

RESULT:
The procedure of creating a host with an existing image file and migrating the virtual machine was practiced and recorded through this experiment.
Ex. No: 8
DEVELOP A NEW WEB SERVICE FOR CALCULATOR
Date:

Aim:
To develop a web service program for a calculator.

Procedure:
Step 1. Open NetBeans and go to New Project.
Step 2. Choose Java Web, select Web Application and click Next.
Step 3. Enter the project name, click Next and select the server (either Tomcat or GlassFish).
Step 4. Click Next and select Finish.
Step 5. Right-click the WebApplication (project name), select New, and choose Java Class.
Step 6. Type the following code:
import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;

@WebService(serviceName = "MathService", targetNamespace = "http://my.org/ns/")
public class MathService {
    @WebMethod(operationName = "hello")
    public String hello(@WebParam(name = "name") String txt) {
        return "Hello " + txt + "!";
    }
    @WebMethod(operationName = "addSer")
    public String addSer(@WebParam(name = "value1") int v1, @WebParam(name = "value2") int v2) {
        return "Answer:" + (v1 + v2) + "!";
    }
    @WebMethod(operationName = "subSer")
    public String subSer(@WebParam(name = "value1") int v1, @WebParam(name = "value2") int v2) {
        return "Answer:" + (v1 - v2) + "!";
    }
    @WebMethod(operationName = "mulSer")
    public String mulSer(@WebParam(name = "value1") int v1, @WebParam(name = "value2") int v2) {
        return "Answer:" + (v1 * v2) + "!";
    }
    @WebMethod(operationName = "divSer")
    public String divSer(@WebParam(name = "value1") int v1, @WebParam(name = "value2") int v2) {
        // Guard against division by zero explicitly: float division would
        // return Infinity rather than throw ArithmeticException.
        if (v2 == 0) {
            return "Answer: cannot divide by zero!";
        }
        float res = ((float) v1) / ((float) v2);
        return "Answer:" + res + "!";
    }
}
Step 7. Run the project by pressing the F6 key or the Run button.
Step 8. Check in the web browser that the following address is available, else enter it:
http://localhost:8080/WebApplication2/MathService?Tester
(MathService?Tester represents the Java class name.)

Output Screen:
Give some values in the fields and check the output by pressing the enter key.
Finally select the WSDL link.

PREPARATION: 30
PERFORMANCE: 30
RECORD: 40
TOTAL: 100

Result:
Thus the calculator program for web services was executed successfully.
Ex. No: 9
IMPLEMENTATION OF OGSA USING WINDOWS
Date:

Aim:
To develop a new OGSA-compliant web service using Windows.

Procedure:
I. Set up the Development Environment

1.1. First you need to set up the development environment. The following things are needed if you want to create web services using Axis2 and the Eclipse IDE.

Some Eclipse versions have compatibility issues with Axis2. This tutorial is tested with Apache Axis2 1.5.2, Eclipse Helios and Apache Tomcat 6.

1) Apache Axis2 Binary Distribution - Download
2) Apache Axis2 WAR Distribution - Download
3) Apache Tomcat - Download
4) Eclipse IDE - Download
5) Java installed on your computer - Download

1.2. Then you have to set the environment variables for Java and Tomcat. The following variables should be added:
JAVA_HOME :- Set the value to the JDK directory (e.g. C:\Program Files\Java\jdk1.6.0_21)
TOMCAT_HOME :- Set the value to the top-level directory of your Tomcat install (e.g. D:\programs\apache-tomcat-6.0.29)
PATH :- Set the value to the bin directory of your JDK (e.g. C:\Program Files\Java\jdk1.6.0_21\bin)

1.3. Now you have to add a runtime environment to Eclipse. Go to Window --> Preferences and select Server --> Runtime Environments.
There select Apache Tomcat v6.0 and in the next window browse your Apache installation directory and click Finish.

1.4. Then click on Web Services --> Axis2 Preferences and browse to the top-level directory of the Apache Axis2 Binary Distribution.

II. Creating the Web Service Using the Bottom-Up Approach

2.1 First create a new Dynamic Web Project (File --> New --> Other...) and choose Web --> Dynamic Web Project.

2.2 Set Apache Tomcat as the Target Runtime and click Modify to install the Axis2 Web Services project facet.

2.3 Select Axis2 Web Services.

2.4 Click OK and then Next. There you can choose folders, and click Finish when you are done.

III. Create the Web Service Class

Now you can create a Java class that you would want to expose as a web service. I'm going to create a new class called FirstWebService with a public method called addTwoNumbers, which takes two integers as input and returns their sum.

3.1 Right-click on MyFirstWebService in the Project Explorer, select New --> Class and give a suitable package name and class name. I have given com.sencide as the package name and FirstWebService as the class name.

package com.sencide;
public class FirstWebService {
    public int addTwoNumbers(int firstNumber, int secondNumber) {
        return firstNumber + secondNumber;
    }
}

3.2 Then select File --> New --> Other and choose Web Service.

3.3 Select the FirstWebService class as the service implementation and, to make sure that the configuration is set up correctly, click on Server runtime.

3.4 There set the Web Service runtime to Axis2 (the default is Axis) and click OK.

3.5 Click Next and make sure "Generate a default service.xml file" is selected.

3.6 Click Next and start the server; after the server is started you can click Finish if you do not want to publish the web service to a test UDDI repository.

You can go to http://localhost:8888/MyFirstWebService/services/listServices to see your running service, which is deployed by Axis2. You can see the WSDL by clicking the link FirstWebService.

Result:
Thus a new OGSA-compliant web service was developed and verified using Windows.
Ex. No: 10
TO DEVELOP A NEW OGSA-COMPLIANT WEB SERVICE (UBUNTU-GRID)
Date:

Aim:
To develop a new OGSA-compliant web service.

Procedure:
Step 1: Choose New Project from the main menu.

Step 2: Select POM Project from the Maven category.

Step 3: Type MavenOSGiCDIProject as the project name and click Finish. When you click Finish, the IDE creates the POM project and opens it in the Projects window.

Step 4: Expand the Project Files node in the Projects window, double-click pom.xml to open the file in the editor, make the following modification and save.
In the pom.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.mycompany</groupId>
    <artifactId>MavenOSGiCDIProject</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>pom</packaging>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.osgi</groupId>
                <artifactId>org.osgi.core</artifactId>
                <version>4.2.0</version>
                <scope>provided</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
</project>
Step 5: Creating OSGi Bundle Projects
Choose File -> New Project to open the New Project wizard.

Step 6: Choose OSGi Bundle from the Maven category. Click Next.

Step 7: Type MavenHelloServiceApi as the project name for the OSGi bundle.

The IDE creates the bundle project and opens it in the Projects window. Check the build plugins in pom.xml under Project Files. The IDE also adds the org.osgi.core artifact by default, which can be viewed under Dependencies.

Build the MavenHelloServiceApi project:
1. Right-click the MavenHelloServiceApi project node in the Projects window and choose Properties.
2. Select the Sources category in the Project Properties dialog box.
3. Set the Source/Binary Format to 1.6, confirm that the Encoding is UTF-8, and click OK.
4. Right-click the Source Packages node in the Projects window and choose New -> Java Interface.
5. Type Hello for the class name.
6. Select com.mycompany.mavenhelloserviceapi as the package. Click Finish.
7. Add the following sayHello method to the interface and save the changes:
package com.mycompany.mavenhelloserviceapi;
public interface Hello {
    String sayHello(String name);
}
8. Right-click the project node in the Projects window and choose Build.
9. After building the project, open the Files window and expand the project node so that you can see MavenHelloServiceApi-1.0-SNAPSHOT.jar created in the target folder.

Step 8: Creating the MavenHelloServiceImpl Implementation Bundle
Here you will create the MavenHelloServiceImpl bundle in the POM project.
1. Choose File -> New Project to open the New Project wizard.
2. Choose OSGi Bundle from the Maven category. Click Next.
3. Type MavenHelloServiceImpl for the project name.
4. Click Browse and select the MavenOSGiCDIProject POM project as the location. Click Finish (as in the earlier step).
5. Right-click the project node in the Projects window and choose Properties.
6. Select the Sources category in the Project Properties dialog box.
7. Set the Source/Binary Format to 1.6 and confirm that the Encoding is UTF-8. Click OK.
8. Right-click the Source Packages node in the Projects window and choose New -> Java Class.
9. Type HelloImpl for the class name.
10. Select com.mycompany.mavenhelloserviceimpl as the package. Click Finish.
11. Type the following and save your changes:
package com.mycompany.mavenhelloserviceimpl;
import com.mycompany.mavenhelloserviceapi.Hello;
public class HelloImpl implements Hello {
    public String sayHello(String name) {
        return "Hello " + name;
    }
}
When you implement Hello, the IDE will display an error that you need to resolve by adding the MavenHelloServiceApi project as a dependency.
12. Right-click the Dependencies folder of MavenHelloServiceImpl in the Projects window and choose Add Dependency.
13. Click the Open Projects tab in the Add Library dialog.
14. Select the MavenHelloServiceApi OSGi bundle. Click Add.
15. Expand the com.mycompany.mavenhelloserviceimpl package, double-click Activator.java and open the file in the editor.

The IDE automatically creates the Activator.java bundle activator, which manages the lifecycle of the bundle. By default it includes start() and stop(). Modify the start() and stop() methods in the bundle activator class by adding the following lines:
package com.mycompany.mavenhelloserviceimpl;
import com.mycompany.mavenhelloserviceapi.Hello;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
public class Activator implements BundleActivator {
    public void start(BundleContext context) throws Exception {
        System.out.println("HelloActivator::start");
        context.registerService(Hello.class.getName(), new HelloImpl(), null);
        System.out.println("HelloActivator::registration of Hello service successful");
    }
    public void stop(BundleContext context) throws Exception {
        context.ungetService(context.getServiceReference(Hello.class.getName()));
        System.out.println("HelloActivator stopped");
    }
}

Step 9: Building and Deploying the OSGi Bundles
Here you will build the OSGi bundles and deploy them to GlassFish.
1. Right-click the MavenOSGiCDIProject folder in the Projects window and choose Clean and Build.
## When you build the project, the IDE will create the JAR files in the target folder and also install the snapshot JAR in the local repository.
## In the Files window, expanding the target folder of each of the two bundle projects will show the two JAR archives (MavenHelloServiceApi-1.0-SNAPSHOT.jar and MavenHelloServiceImpl-1.0-SNAPSHOT.jar).
2. Start the GlassFish server (if not already started).
3. Copy MavenHelloServiceApi-1.0-SNAPSHOT.jar to /home/linux/glassfish-4.1.1/glassfish/domains/domain1/autodeploy/bundles (the GlassFish install directory).
4. You can see output similar to the following in the GlassFish server log in the Output window:
Info: Installed /home/linux/glassfish-4.1.1/glassfish/domains/domain1/autodeploy/bundles/MavenHelloServiceApi-1.0-SNAPSHOT.jar
Info: Started bundle: file:/home/linux/glassfish-4.1.1/glassfish/domains/domain1/autodeploy/bundles/MavenHelloServiceApi-1.0-SNAPSHOT.jar
5. Repeat the step, copying MavenHelloServiceImpl-1.0-SNAPSHOT.jar to /home/linux/glassfish-4.1.1/glassfish/domains/domain1/autodeploy/bundles (the GlassFish install directory).
6. You can see the output in the GlassFish server log:
Info: Installed /home/linux/glassfish-4.1.1/glassfish/domains/domain1/autodeploy/bundles/MavenHelloServiceImpl-1.0-SNAPSHOT.jar
Info: HelloActivator::start
Info: HelloActivator::registration of Hello service successful
Info: Started bundle: file:/home/linux/glassfish-4.1.1/glassfish/domains/domain1/autodeploy/bundles/MavenHelloServiceImpl-1.0-SNAPSHOT.jar

Result:
Thus a new OGSA-compliant web service has been executed successfully.
  1355.  
  1356. Ex.No:11 To develop a Grid Service using Apache Axis
  1357. Date:
  1358.  
  1359. Aim:
  1360. To develop a Grid Service using Apache Axis
Procedure:
1. Open the terminal.
2. Type cd /opt/axis2/axis2-1.7.3/bin and press Enter.
3. Type chmod 500 axis2server.sh to make the server script executable.
4. Type ./axis2server.sh to start the standalone Axis2 server.
5. Then open a browser on Ubuntu and enter the URL localhost:8080/axis2/services; the list of deployed services is displayed. A minimal service that could be deployed on this server is sketched below.
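Step 5 only lists the services that ship with the distribution. To deploy your own service, Axis2 can expose a plain Java class (POJO); a minimal sketch is shown below. The class and method names are illustrative assumptions. To deploy it, you would package the compiled class together with a services.xml (whose ServiceClass parameter names the class) into an .aar archive under the server's repository/services directory and restart axis2server.sh.

// HelloGrid.java - a minimal POJO that Axis2 can expose as a web service
public class HelloGrid {
    // Axis2 maps public methods like this one to WSDL operations automatically
    public String greet(String name) {
        return "Hello " + name + ", from the grid service!";
    }
}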
Result:
Thus a Grid Service using Apache Axis has been developed and executed successfully.
  1402.  
  1403. Ex.No:12 Develop applications using Java or C/C++ Grid APIs
  1404. Date:
  1405. Aim:
  1406. To Develop applications using Java or C/C++ Grid APIs
Procedure:
a. Open the terminal.
b. Type cd /opt/axis2/axis2-1.7.3/bin and press Enter.
c. Type gedit hello.c to create the source file.
d. Type gcc hello.c to compile it.
e. Type ./a.out to run the compiled program.
Program 1: Switch case

#include <stdio.h>

int main(void)
{
    int a, b, ch;
    int c;
    printf("Enter two numbers\n");
    scanf("%d%d", &a, &b);
    do
    {
        printf("\nEnter your choice\n");
        scanf("%d", &ch);
        switch (ch)
        {
        case 1:
            printf("Addition is %d", a + b);
            break;
        case 2:
            printf("Subtraction is %d", a - b);
            break;
        case 3:
            printf("Multiplication is %d", a * b);
            break;
        case 4:
            printf("Division is %d", a / b);
            break;
        default:
            printf("Enter valid choice");
            break;
        }
        /* 0 continues the loop, any other value exits */
        printf("\nDo you want to continue 0/1 ");
        scanf("%d", &c);
    } while (c == 0);
    return 0;
}

output:
Enter two numbers
3
2
Enter your choice
1
Addition is 5
Do you want to continue 0/1
0
Enter your choice
5
Enter valid choice
Do you want to continue 0/1
1
  1464.  
  1465.  
  1466.  
  1467.  
program2 : Armstrong Number

#include <stdio.h>

int main(void)
{
    int n, a = 0, r = 0, rf = 0;
    printf("Enter the number\n");
    scanf("%d", &n);
    a = n;
    /* sum the cubes of the digits (this check is valid for 3-digit Armstrong numbers) */
    while (n != 0)
    {
        r = n % 10;
        rf += r * r * r;
        n = n / 10;
    }
    if (rf == a)
        printf("entered number is armstrong number");
    else
        printf("entered number is not an armstrong number");
    return 0;
}

output:
Enter the number
153
entered number is armstrong number
Result:
Thus applications using Java or C/C++ Grid APIs have been executed successfully.
  1504. Ex.No:13 Develop secured applications using basic security mechanisms
  1505. Date:
  1506.  
  1507. Aim:
  1508. To develop secured applications using basic security mechanisms available in Globus Toolkit
Procedure:
Develop secured applications using the basic security mechanisms available in the Globus Toolkit.
1. Run the following commands to install the basic security components.
Installing GRID Essentials
  1514. wget http://www.globus.org/ftppub/gt6/installers/repo/globus-toolkit-repo_latest_all.deb
  1515. sudo dpkg -i globus-toolkit-repo_latest_all.deb
  1516. sudo apt-get update
  1517. sudo apt-get install globus-data-management-client
  1518. sudo apt-get install globus-gridftp
  1519. sudo apt-get install globus-gram5
  1520. sudo apt-get install globus-gsi
  1521. sudo apt-get install globus-data-management-server
  1522. sudo apt-get install globus-data-management-client
  1523. sudo apt-get install globus-data-management-sdk
  1524. sudo apt-get install globus-resource-management-server
  1525. sudo apt-get install globus-resource-management-client
  1526. sudo apt-get install globus-resource-management-sdk
  1527. sudo apt-get install myproxy
  1528. sudo apt-get install gsi-openssh
  1529. sudo apt-get install globus-gridftp globus-gram5 globus-gsi myproxy myproxy-server myproxy-admin
2. After installing myproxy, gsi-openssh and the Globus GRAM packages:
a. Open the file manager (Computer).
b. Search for the grid-security folder.
c. There you should see gsi.conf and sshftp. (This indicates that the basic security mechanisms are configured.) An optional check using the toolkit's standard proxy commands is sketched below.
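A quick way to confirm that GSI is actually usable is to exercise the standard proxy-credential commands that ship with the toolkit. This is an optional check, and it assumes a user certificate has already been created (e.g. via grid-cert-request) and placed under ~/.globus:

grid-proxy-init        # create a short-lived proxy credential from your user certificate
grid-proxy-info        # display the subject, issuer and remaining lifetime of the proxy
myproxy-logon -s localhost -l <username>   # retrieve a delegated credential from the MyProxy server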
  1534.  
Result:
Thus secured applications using basic security mechanisms available in the Globus Toolkit have been developed successfully.
  1538. Ex. No: 14 Find procedure to set up the one node Hadoop cluster.
  1539. Date:
  1540.  
  1541. AIM:
  1542. To Set up the one node Hadoop cluster.
  1543.  
PRE-REQUISITE:
1. Java v1.8 installed
2. SSH access configured (a text editor also helps: sudo apt-get install vim)
This exercise has been created for the following environment:
• Ubuntu Linux 64-bit
• JDK 1.8.0_05
• Hadoop 2.7.x stable release
Note: This exercise covers only compatible versions of Hadoop ecosystem tools and software downloaded from the official Apache Hadoop website. Preferably use a stable release of the particular tool.
  1552.  
  1553. PROCEDURE:
  1554. 1) Installing Java
  1555. Hadoop is a framework written in Java for running applications on large clusters of commodity hardware. Hadoop needs Java 6 or above to work.
Step 1: Download tar and extract
Download the JDK tar.gz file for Linux 64-bit and extract it into /opt:
# cd /opt
# sudo tar zxvf /home/user/Downloads/jdk-8u5-linux-x64.tar.gz
# cd /opt/jdk1.8.0_05
Step 2: Set Environments
• Open the /etc/profile file and add the following lines as per the version.
• This sets the environment variables for Java.
• Use the root user to save /etc/profile, or use gedit instead of vi.
• The 'profile' file contains commands that ought to be run for login shells.
# sudo nano /etc/profile
#--insert JAVA_HOME
JAVA_HOME=/opt/jdk1.8.0_05
#--in PATH variable just append at the end of the line
PATH=$PATH:$JAVA_HOME/bin
#--Append JAVA_HOME at end of the export statement
export PATH JAVA_HOME
Save the file (in vi, press the "Esc" key followed by :wq!).
  1572. Step 3: Source the /etc/profile
  1573. # source /etc/profile
Step 4: Update the java alternatives
1. By default the OS will have OpenJDK. Check with "java -version"; you will see "openjdk" in the output.
2. If you also have OpenJDK installed, then you'll need to update the java alternatives.
3. If your system has more than one version of Java, configure which one your system uses by entering the following commands in a terminal window.
4. After switching, "java -version" should report "Java HotSpot(TM) 64-Bit Server".
# update-alternatives --install /usr/bin/java java /opt/jdk1.8.0_05/bin/java 1
# update-alternatives --config java
--type the selection number:
# java -version
2) Configure SSH
• Hadoop requires SSH access to manage its nodes, i.e. remote machines plus your local machine if you want to use Hadoop on it (which is what we want to do in this exercise). For our single-node setup of Hadoop, we therefore need to configure SSH access to localhost.
• Password-less, SSH-key-based authentication is needed so that the master node can log in to the slave nodes (and the secondary node) to start/stop them easily, without any delays for authentication.
• If you skip this step, you will have to provide a password every time.
Generate an SSH key for the user, then enable password-less SSH access to your local machine:
sudo apt-get install openssh-server
--You will be asked to enter a password:
root@ubuntu# ssh localhost
root@ubuntu# ssh-keygen
root@ubuntu# ssh-copy-id -i localhost
--After the above steps, you will be connected without a password:
root@ubuntu# ssh localhost
root@ubuntu# exit
3) Hadoop installation
• Now download Hadoop from the official Apache site, preferably a stable release version of Hadoop 2.7.x, and extract the contents of the Hadoop package to a location of your choice.
• For example, choose the location /opt/
Step 1: Download the tar.gz file of the latest version of Hadoop (hadoop-2.7.x) from the official site.
Step 2: Extract (untar) the downloaded file into /opt:
root@ubuntu# cd /opt
root@ubuntu# sudo tar zxvf /home/user/Downloads/hadoop-2.7.0.tar.gz
root@ubuntu# cd hadoop-2.7.0/
Like Java, update the Hadoop environment variable in /etc/profile:
# sudo nano /etc/profile
#--insert HADOOP_PREFIX
HADOOP_PREFIX=/opt/hadoop-2.7.0
#--in PATH variable just append at the end of the line
PATH=$PATH:$HADOOP_PREFIX/bin
#--Append HADOOP_PREFIX at end of the export statement
export PATH JAVA_HOME HADOOP_PREFIX
Save the file (in vi, press the "Esc" key followed by :wq!).
  1610. Step 3: Source the /etc/profile
  1611. # source /etc/profile
  1612. Verify Hadoop installation
  1613. # cd $HADOOP_PREFIX
  1614. # bin/hadoop version
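If the environment variables are set correctly, this prints a version banner whose first line names the release, e.g. "Hadoop 2.7.0".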
3.1) Modify the Hadoop Configuration Files
• In this section, we will configure the directory where Hadoop stores its configuration files, the network ports it listens to, etc. Our setup will use the Hadoop Distributed File System (HDFS), even though we are using only a single local machine.
• Add the following properties to the various Hadoop configuration files, which are available under $HADOOP_PREFIX/etc/hadoop/:
• core-site.xml, hdfs-site.xml, mapred-site.xml & yarn-site.xml
Update the Java and Hadoop paths in the Hadoop environment file:
# cd $HADOOP_PREFIX/etc/hadoop
# nano hadoop-env.sh
Paste the following lines at the beginning of the file:
export JAVA_HOME=/opt/jdk1.8.0_05
export HADOOP_PREFIX=/opt/hadoop-2.7.0
  1624. Modify the core-site.xml
  1625. # cd $HADOOP_PREFIX/etc/hadoop
  1626. # nano core-site.xml
  1627. Paste following between <configuration> tags
  1628. <configuration>
  1629. <property>
  1630. <name>fs.defaultFS</name>
  1631. <value>hdfs://localhost:9000</value>
  1632. </property>
  1633. </configuration>
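Here fs.defaultFS tells every HDFS client and daemon which NameNode to contact; port 9000 is the conventional choice and is reused by the examples later in this manual.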
  1634. Modify the hdfs-site.xml
  1635. # nano hdfs-site.xml
  1636. Paste following between <configuration> tags
  1637. <configuration>
  1638. <property>
  1639. <name>dfs.replication</name>
  1640. <value>1</value>
  1641. </property>
  1642. </configuration>
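dfs.replication is set to 1 because a single-node cluster has only one DataNode on which block replicas can be stored.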
YARN configuration - Single Node
Modify the mapred-site.xml
  1644. # cp mapred-site.xml.template mapred-site.xml
  1645. # nano mapred-site.xml
  1646. Paste following between <configuration> tags
  1647. <configuration>
  1648. <property>
  1649. <name>mapreduce.framework.name</name>
  1650. <value>yarn</value>
  1651. </property>
</configuration>
Modify yarn-site.xml
  1653. # nano yarn-site.xml
  1654. Paste following between <configuration> tags
  1655. <configuration>
  1656. <property>
  1657. <name>yarn.nodemanager.aux-services</name>
  1658. <value>mapreduce_shuffle</value>
  1659. </property>
  1660. </configuration>
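The mapreduce_shuffle auxiliary service lets each NodeManager serve map outputs to reduce tasks during the shuffle phase; without it, MapReduce jobs fail on YARN.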
Formatting the HDFS file system via the NameNode
• The first step in starting up your Hadoop installation is formatting the Hadoop file system, which is implemented on top of the local file system of our "cluster" (which here includes only our local machine). You need to do this the first time you set up a Hadoop cluster.
• Do not format a running Hadoop file system, as you will lose all the data currently in the cluster (in HDFS).
root@ubuntu# cd $HADOOP_PREFIX
root@ubuntu# bin/hadoop namenode -format
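Formatting only initializes an empty HDFS namespace in the NameNode's storage directory (by default under /tmp unless dfs.namenode.name.dir says otherwise); it does not format any local disk partition.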
Start the NameNode daemon and DataNode daemon: (port 50070)
root@ubuntu# sbin/start-dfs.sh
To see the running daemons, just type jps (or /opt/jdk1.8.0_05/bin/jps).

Start the ResourceManager daemon and NodeManager daemon: (port 8088)
root@ubuntu# sbin/start-yarn.sh
To see the running daemons, again type jps.

To stop the running processes:
root@ubuntu# sbin/stop-dfs.sh
root@ubuntu# sbin/stop-yarn.sh
Make the HDFS directories required to execute MapReduce jobs:
$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/mit
• Copy the input files into the distributed filesystem:
$ bin/hdfs dfs -put <input-path>/* /input
• Run some of the examples provided:
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar grep /input /output '(CSE)'
• Examine the output files:
Copy the output files from the distributed filesystem to the local filesystem and examine them:
$ bin/hdfs dfs -get /output output
$ cat output/*
• Or view the output files directly on the distributed filesystem:
$ bin/hdfs dfs -cat /output/*
(A Java-API equivalent of the mkdir and put steps above is sketched below.)
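The same directory-creation and copy steps can also be performed programmatically through the Hadoop FileSystem API, which the MapReduce exercise later in this manual builds on. The following is a minimal sketch, not part of the original procedure; the class name HdfsSetup and the local file path are illustrative assumptions.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSetup {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:9000"); // matches core-site.xml above
        FileSystem fs = FileSystem.get(conf);
        fs.mkdirs(new Path("/user/mit"));                  // equivalent of: hdfs dfs -mkdir
        // equivalent of: hdfs dfs -put (local path is an assumed example)
        fs.copyFromLocalFile(new Path("/home/user/input.txt"), new Path("/input/input.txt"));
        fs.close();
    }
}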
  1692.  
  1693. Output:
  1694. Hadoop installation:
  1698. Create the HDFS directories:
  1712. Result:
  1713. Thus the one node Hadoop cluster is installed successfully.
  1714.  
  1715. EX. NO.: 15 Mount the one node Hadoop cluster using FUSE
  1716. Date:
  1717.  
  1718. Aim:
  1719. To mount the one node Hadoop cluster using FUSE.
  1720.  
  1721. Procedure:
  1722. Download the cdh3 repository from the internet.
  1723. $ wget http://archive.cloudera.com/one-click-install/maverick/cdh3-repository_1.0_all.deb
  1724. Add the cdh3 repository to default system repository.
  1725. $ sudo dpkg -i cdh3-repository_1.0_all.deb
  1726. Update the package information using the following command.
  1727. $ sudo apt-get update
  1728. Install the hadoop-fuse.
  1729. $ sudo apt-get install hadoop-0.20-fuse
  1730. Once fuse-dfs is installed, go ahead and mount HDFS using FUSE as follows:
  1731. $ sudo hadoop-fuse-dfs dfs://<name_node_hostname>:<namenode_port> <mount_point>
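For example, assuming the single-node cluster from the previous exercise (NameNode at localhost:9000) and an empty directory /mnt/hdfs as the mount point (both are assumptions; substitute your own values), the call might look like this:
$ sudo mkdir -p /mnt/hdfs
$ sudo hadoop-fuse-dfs dfs://localhost:9000 /mnt/hdfs
$ ls /mnt/hdfs   # should list the HDFS root, e.g. /user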
  1732.  
  1733. Once HDFS has been mounted at <mount_point>, you can use most of the traditional filesystem operations (e.g., cp, rm, cat, mv, mkdir, rmdir, more, scp). However, random write operations such as rsync, and permission related operations such as chmod, chown are not supported in FUSE-mounted HDFS.
  1777. Result:
  1778. Thus the one node Hadoop cluster is mounted using FUSE successfully.
EX. NO: 16 Write a word count program to demonstrate the use of Map and Reduce tasks
Date:
  1781.  
  1782. AIM:
  1783. Word count program to demonstrate the use of Map and Reduce tasks
  1784.  
  1785. PRE-REQUISITE:
  1786. ● Java version > 1.6 is installed and configured properly
  1787. ● Hadoop version 2.x is installed with proper configuration and hadoop daemons are running
  1788. PROCEDURE:
  1789. 1. Analyze the input file content
  1790. 2. Develop the code
  1791. a. Writing a map function
b. Writing a reduce function
c. Writing the Driver class
  1793. 3. Compiling the source
  1794. 4. Building the JAR file
  1795. 5. Starting the DFS
  1796. 6. Creating Input path in HDFS and moving the data into Input path
  1797. 7. Executing the program
Sample Program:
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    // Step a
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        // Hadoop supported data types
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        // map method that performs the tokenizer job and frames the initial key value pairs
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            // taking one line at a time and tokenizing the same
            StringTokenizer itr = new StringTokenizer(value.toString());
            // iterating through all the words available in that line and forming the key value pair
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                // sending to the context which in turn passes the same to the reducer
                context.write(word, one);
            }
        }
    }

    // Step b
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        // Reduce method accepts the key value pairs from the mappers, does the
        // aggregation based on keys and produces the final output
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            /* iterates through all the values available with a key, adds them together
               and gives the final result as the key and the sum of its values */
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    // Step c
    public static void main(String[] args) throws Exception {
        // creating conf instance for Job Configuration
        Configuration conf = new Configuration();
        // Parsing the command line arguments
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length < 2) {
            System.err.println("Usage: wordcount <in> [<in>...] <out>");
            System.exit(2);
        }
        // Create a new Job object and assign a job name for identification purposes
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        // Specify various job-specific parameters
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        // Setting job object with the data type of the output key
        job.setOutputKeyClass(Text.class);
        // Setting job object with the data type of the output value
        job.setOutputValueClass(IntWritable.class);
        // the HDFS input and output directories to be fetched from the command line
        for (int i = 0; i < otherArgs.length - 1; ++i) {
            FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
        }
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[otherArgs.length - 1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
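As a quick sanity check of the map and reduce logic (this is an illustrative example, not output from the run below): if the input file contained the single line "hello grid hello cloud", the mapper would emit (hello,1) twice and (grid,1), (cloud,1) once each, and the reducer would produce:
cloud 1
grid 1
hello 2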
  1874. STEPS:
  1875. 1. Start NameNode daemon and DataNode daemon: (port 50070)
  1876. $ sbin/start-dfs.sh
  1877. 2. Start ResourceManager daemon and NodeManager daemon: (port 8088)
  1878. $ sbin/start-yarn.sh
  1879. 3. Make the HDFS directories required to execute MapReduce jobs:
  1880. $ bin/hdfs dfs -mkdir /user
  1881. 4. Copy the input files into the distributed filesystem:
  1882. $ bin/hdfs dfs -put <input-path>/* /input
Run some of the examples provided:
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar wordcount /Cloud/file1.txt /op1
  1885. 16/05/28 14:07:04 INFO client.RMProxy: Connecting to ResourceManager at
  1886. /0.0.0.0:8032
  1887. 16/05/28 14:07:04 INFO input.FileInputFormat: Total input paths to process : 1
  1888. 16/05/28 14:07:04 INFO mapreduce.JobSubmitter: number of splits:1
16/05/28 14:07:05 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1464422714543_0004
  1891. 16/05/28 14:07:05 INFO impl.YarnClientImpl: Submitted application application_1464422714543_0004
16/05/28 14:07:05 INFO mapreduce.Job: The url to track the job: http://PLLAB-49:8088/proxy/application_1464422714543_0004/
  1894. 16/05/28 14:07:05 INFO mapreduce.Job: Running job: job_1464422714543_0004
  1895. 16/05/28 14:07:10 INFO mapreduce.Job: Job job_1464422714543_0004 running in uber mode : false
  1896. 16/05/28 14:07:10 INFO mapreduce.Job: map 0% reduce 0%
  1897. 16/05/28 14:07:13 INFO mapreduce.Job: map 100% reduce 0%
  1898. 16/05/28 14:07:17 INFO mapreduce.Job: map 100% reduce 100%
  1899. 16/05/28 14:07:18 INFO mapreduce.Job: Job job_1464422714543_0004 completed
  1900. successfully
  1901. 16/05/28 14:07:18 INFO mapreduce.Job: Counters: 49
  1902. File System Counters
  1903. FILE: Number of bytes read=155
  1904. FILE: Number of bytes written=229563
  1905. FILE: Number of read operations=0
  1906. FILE: Number of large read operations=0
  1907. FILE: Number of write operations=0
  1908. HDFS: Number of bytes read=884837
  1909. HDFS: Number of bytes written=142
  1910. HDFS: Number of read operations=6
  1913. HDFS: Number of large read operations=0
  1914. HDFS: Number of write operations=2
  1915. Job Counters
  1916. Launched map tasks=1
  1917. Launched reduce tasks=1
  1918. Data-local map tasks=1
  1919. Total time spent by all maps in occupied slots (ms)=1615
  1920. Total time spent by all reduces in occupied slots (ms)=1768
  1921. Total time spent by all map tasks (ms)=1615
  1922. Total time spent by all reduce tasks (ms)=1768
  1923. Total vcore-seconds taken by all map tasks=1615
  1924. Total vcore-seconds taken by all reduce tasks=1768
  1925. Total megabyte-seconds taken by all map tasks=1653760
  1926. Total megabyte-seconds taken by all reduce tasks=1810432
  1927. Map-Reduce Framework
  1928. Map input records=1
  1929. Map output records=99348
  1930. Map output bytes=1282127
  1931. Map output materialized bytes=155
  1932. Input split bytes=102
  1933. Combine input records=99348
  1934. Combine output records=10
  1935. Reduce input groups=10
  1936. Reduce shuffle bytes=155
  1937. Reduce input records=10
  1938. Reduce output records=10
  1939. Spilled Records=20
  1940. Shuffled Maps =1
  1941. Failed Shuffles=0
  1942. Merged Map outputs=1
  1943. GC time elapsed (ms)=71
  1944. CPU time spent (ms)=1720
  1945. Physical memory (bytes) snapshot=454377472
  1946. Virtual memory (bytes) snapshot=3883495424
  1947. Total committed heap usage (bytes)=321388544
  1948. Shuffle Errors
  1949. BAD_ID=0
  1950. CONNECTION=0
  1951. IO_ERROR=0
  1952. WRONG_LENGTH=0
  1953. WRONG_MAP=0
  1954. WRONG_REDUCE=0
  1955. File Input Format Counters
  1956. Bytes Read=884735
  1957. File Output Format Counters
Bytes Written=142
user@ubuntu:~/hadoop-2.7.0$
  1959.  
In a browser, open "http://localhost:50070/" and choose Utilities → Browse the file system.
Result:
Thus the word count program demonstrating the use of Map and Reduce tasks has been executed successfully.