- Only in ansible-1.6.6: ansible.egg-info
- diff -r ansible/ansible/bin/ansible ansible-1.6.6/bin/ansible
- 139c139
- < inventory_manager = inventory.Inventory(options.inventory, vault_password=vault_pass)
- ---
- > inventory_manager = inventory.Inventory(options.inventory)
- diff -r ansible/ansible/bin/ansible-playbook ansible-1.6.6/bin/ansible-playbook
- 102a103,107
- > inventory = ansible.inventory.Inventory(options.inventory)
- > inventory.subset(options.subset)
- > if len(inventory.list_hosts()) == 0:
- > raise errors.AnsibleError("provided hosts list is empty")
- >
- 108,112c113
- < options.ask_vault_pass = options.ask_vault_pass or C.DEFAULT_ASK_VAULT_PASS
- <
- < if options.listhosts or options.syntax or options.listtasks:
- < (_, _, _, vault_pass) = utils.ask_passwords(ask_vault_pass=options.ask_vault_pass)
- < else:
- ---
- > if not options.listhosts and not options.syntax and not options.listtasks:
- 113a115
- > options.ask_vault_pass = options.ask_vault_pass or C.DEFAULT_ASK_VAULT_PASS
- 118a121
- > options.ask_vault_pass = options.ask_vault_pass or C.DEFAULT_ASK_VAULT_PASS
- 123,130c126,133
- < if options.vault_password_file:
- < this_path = os.path.expanduser(options.vault_password_file)
- < try:
- < f = open(this_path, "rb")
- < tmp_vault_pass=f.read().strip()
- < f.close()
- < except (OSError, IOError), e:
- < raise errors.AnsibleError("Could not read %s: %s" % (this_path, e))
- ---
- > if options.vault_password_file:
- > this_path = os.path.expanduser(options.vault_password_file)
- > try:
- > f = open(this_path, "rb")
- > tmp_vault_pass=f.read().strip()
- > f.close()
- > except (OSError, IOError), e:
- > raise errors.AnsibleError("Could not read %s: %s" % (this_path, e))
- 132,133c135,136
- < if not options.ask_vault_pass:
- < vault_pass = tmp_vault_pass
- ---
- > if not options.ask_vault_pass:
- > vault_pass = tmp_vault_pass
- 158,162d160
- < inventory = ansible.inventory.Inventory(options.inventory, vault_password=vault_pass)
- < inventory.subset(options.subset)
- < if len(inventory.list_hosts()) == 0:
- < raise errors.AnsibleError("provided hosts list is empty")
- <
- 165a164,166
- > # let inventory know which playbooks are using so it can know the basedirs
- > inventory.set_playbook_basedir(os.path.dirname(playbook))
- >
- 209,210c210
- < play = ansible.playbook.Play(pb, play_ds, play_basedir,
- < vault_password=pb.vault_password)
- ---
- > play = ansible.playbook.Play(pb, play_ds, play_basedir)
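The ansible-playbook hunks above move the vault-password-file read (written in Python 2 syntax in both trees, with `except (OSError, IOError), e:`). A minimal standalone sketch of that read logic in modern Python — `read_vault_password` is a hypothetical helper for illustration, and it raises `ValueError` where the real script raises `errors.AnsibleError`:

```python
import os


def read_vault_password(path):
    """Read and strip a vault password from a file.

    Mirrors the hunk above: expand '~', read the file in binary mode,
    strip surrounding whitespace, and wrap I/O failures in a single error.
    """
    this_path = os.path.expanduser(path)
    try:
        with open(this_path, "rb") as f:
            return f.read().strip()
    except (OSError, IOError) as e:
        # real code: raise errors.AnsibleError("Could not read %s: %s" % ...)
        raise ValueError("Could not read %s: %s" % (this_path, e))
```

The `with` block replaces the explicit `open`/`close` pair from the diff; behavior is otherwise the same.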
- diff -r ansible/ansible/bin/ansible-pull ansible-1.6.6/bin/ansible-pull
- 153,156c153
- < if not options.inventory:
- < inv_opts = 'localhost,'
- < else:
- < inv_opts = options.inventory
- ---
- > inv_opts = 'localhost,'
- Only in ansible/ansible: CHANGELOG.md
- Only in ansible/ansible: CODING_GUIDELINES.md
- Only in ansible/ansible: CONTRIBUTING.md
- Only in ansible-1.6.6: deb-build
- Only in ansible-1.6.6: dist
- diff -r ansible/ansible/docs/man/man1/ansible.1 ansible-1.6.6/docs/man/man1/ansible.1
- 3,5c3,5
- < .\" Author: :doctype:manpage
- < .\" Generator: DocBook XSL Stylesheets v1.76.1 <http://docbook.sf.net/>
- < .\" Date: 05/26/2014
- ---
- > .\" Author: [see the "AUTHOR" section]
- > .\" Generator: DocBook XSL Stylesheets v1.75.2 <http://docbook.sf.net/>
- > .\" Date: 11/27/2013
- 7c7
- < .\" Source: Ansible 1.7
- ---
- > .\" Source: Ansible 1.4.1
- 10,19c10
- < .TH "ANSIBLE" "1" "05/26/2014" "Ansible 1\&.7" "System administration commands"
- < .\" -----------------------------------------------------------------
- < .\" * Define some portability stuff
- < .\" -----------------------------------------------------------------
- < .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- < .\" http://bugs.debian.org/507673
- < .\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html
- < .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- < .ie \n(.g .ds Aq \(aq
- < .el .ds Aq '
- ---
- > .TH "ANSIBLE" "1" "11/27/2013" "Ansible 1\&.4\&.1" "System administration commands"
- 37c28
- < \fBAnsible\fR is an extra\-simple tool/framework/API for doing \*(Aqremote things\*(Aq over SSH\&.
- ---
- > \fBAnsible\fR is an extra\-simple tool/framework/API for doing \'remote things\' over SSH\&.
- 85c76
- < \fB\-a\fR \*(Aq\fIARGUMENTS\fR\*(Aq, \fB\-\-args=\fR\*(Aq\fIARGUMENTS\fR\*(Aq
- ---
- > \fB\-a\fR \'\fIARGUMENTS\fR\', \fB\-\-args=\fR\'\fIARGUMENTS\fR\'
- 177c168
- < Ranges of hosts are also supported\&. For more information and additional options, see the documentation on http://docs\&.ansible\&.com/\&.
- ---
- > Ranges of hosts are also supported\&. For more information and additional options, see the documentation on http://ansible\&.github\&.com/\&.
- 208,214c199
- < Extensive documentation is available in the documentation site: http://docs\&.ansible\&.com\&. IRC and mailing list info can be found in file CONTRIBUTING\&.md, available in: https://github\&.com/ansible/ansible
- < .SH "AUTHOR"
- < .PP
- < \fB:doctype:manpage\fR
- < .RS 4
- < Author.
- < .RE
- ---
- > Extensive documentation as well as IRC and mailing list info is available on the ansible home page: https://ansible\&.github\&.com/
- diff -r ansible/ansible/docs/man/man1/ansible-doc.1 ansible-1.6.6/docs/man/man1/ansible-doc.1
- 3,5c3,5
- < .\" Author: :doctype:manpage
- < .\" Generator: DocBook XSL Stylesheets v1.76.1 <http://docbook.sf.net/>
- < .\" Date: 05/26/2014
- ---
- > .\" Author: [see the "AUTHOR" section]
- > .\" Generator: DocBook XSL Stylesheets v1.75.2 <http://docbook.sf.net/>
- > .\" Date: 11/27/2013
- 7c7
- < .\" Source: Ansible 1.7
- ---
- > .\" Source: Ansible 1.4.1
- 10,19c10
- < .TH "ANSIBLE\-DOC" "1" "05/26/2014" "Ansible 1\&.7" "System administration commands"
- < .\" -----------------------------------------------------------------
- < .\" * Define some portability stuff
- < .\" -----------------------------------------------------------------
- < .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- < .\" http://bugs.debian.org/507673
- < .\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html
- < .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- < .ie \n(.g .ds Aq \(aq
- < .el .ds Aq '
- ---
- > .TH "ANSIBLE\-DOC" "1" "11/27/2013" "Ansible 1\&.4\&.1" "System administration commands"
- 66,72c57
- < Extensive documentation is available in the documentation site: http://docs\&.ansible\&.com\&. IRC and mailing list info can be found in file CONTRIBUTING\&.md, available in: https://github\&.com/ansible/ansible
- < .SH "AUTHOR"
- < .PP
- < \fB:doctype:manpage\fR
- < .RS 4
- < Author.
- < .RE
- ---
- > Extensive documentation as well as IRC and mailing list info is available on the ansible home page: https://ansible\&.github\&.com/
- diff -r ansible/ansible/docs/man/man1/ansible-galaxy.1 ansible-1.6.6/docs/man/man1/ansible-galaxy.1
- 4,5c4,5
- < .\" Generator: DocBook XSL Stylesheets v1.76.1 <http://docbook.sf.net/>
- < .\" Date: 05/26/2014
- ---
- > .\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
- > .\" Date: 03/16/2014
- 7c7
- < .\" Source: Ansible 1.7
- ---
- > .\" Source: Ansible 1.6
- 10c10
- < .TH "ANSIBLE\-GALAXY" "1" "05/26/2014" "Ansible 1\&.7" "System administration commands"
- ---
- > .TH "ANSIBLE\-GALAXY" "1" "03/16/2014" "Ansible 1\&.6" "System administration commands"
- diff -r ansible/ansible/docs/man/man1/ansible-playbook.1 ansible-1.6.6/docs/man/man1/ansible-playbook.1
- 4,5c4,5
- < .\" Generator: DocBook XSL Stylesheets v1.76.1 <http://docbook.sf.net/>
- < .\" Date: 05/26/2014
- ---
- > .\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
- > .\" Date: 02/12/2014
- 7c7
- < .\" Source: Ansible 1.7
- ---
- > .\" Source: Ansible 1.5
- 10c10
- < .TH "ANSIBLE\-PLAYBOOK" "1" "05/26/2014" "Ansible 1\&.7" "System administration commands"
- ---
- > .TH "ANSIBLE\-PLAYBOOK" "1" "02/12/2014" "Ansible 1\&.5" "System administration commands"
- 94c94,154
- < \fB\-t\fR, \fITAGS\fR, \fB\-\-tags=\fR\fITAGS\fR
- ---
- > \fB\-S\fR, \fB\-\-su\fR
- > .RS 4
- > run operations with su\&.
- > .RE
- > .PP
- > \fB\-\-ask\-su\-pass\fR
- > .RS 4
- > Prompt for the password to use for playbook plays that request su access, if any\&.
- > .RE
- > .PP
- > \fB\-R\fR, \fISU_USER\fR, \fB\-\-sudo\-user=\fR\fISU_USER\fR
- > .RS 4
- > Desired su user (default=root)\&.
- > .RE
- > .PP
- > \fB\-\-ask\-vault\-pass\fR
- > .RS 4
- > Ask for vault password\&.
- > .RE
- > .PP
- > \fB\-\-vault\-password\-file=\fR\fIVAULT_PASSWORD_FILE\fR
- > .RS 4
- > Vault password file\&.
- > .RE
- > .PP
- > \fB\-\-force\-handlers\fR
- > .RS 4
- > Run play handlers even if a task fails\&.
- > .RE
- > .PP
- > \fB\-\-list\-hosts\fR
- > .RS 4
- > Outputs a list of matching hosts without executing anything else\&.
- > .RE
- > .PP
- > \fB\-\-list\-tasks\fR
- > .RS 4
- > List all tasks that would be executed\&.
- > .RE
- > .PP
- > \fB\-\-start\-at\-task=\fR\fISTART_AT\fR
- > .RS 4
- > Start the playbook at the task matching this name\&.
- > .RE
- > .PP
- > \fB\-\-step\fR
- > .RS 4
- > one-step-at-a-time: confirm each task before running\&.
- > .RE
- > .PP
- > \fB\-\-syntax\-check\fR
- > .RS 4
- > Perform a syntax check on the playbook, but do not execute it\&.
- > .RE
- > .PP
- > \fB\-\-private\-key\fR
- > .RS 4
- > Use this file to authenticate the connection\&.
- > .RE
- > .PP
- > \fB\-t\fR, \fITAGS\fR, \fB\fI\-\-tags=\fR\fR\fB\*(AqTAGS\fR
- 99c159
- < \fB\-\-skip\-tags=\fR\fISKIP_TAGS\fR
- ---
- > \fB\fI\-\-skip\-tags=\fR\fR\fB\*(AqSKIP_TAGS\fR
- 149a210,216
- >
- > .PP
- > \fB\-\-version\fR
- > .RS 4
- > Show program's version number and exit\&.
- > .RE
- >
- diff -r ansible/ansible/docs/man/man1/ansible-pull.1 ansible-1.6.6/docs/man/man1/ansible-pull.1
- 5c5
- < .\" Date: 05/26/2014
- ---
- > .\" Date: 01/02/2014
- 7c7
- < .\" Source: Ansible 1.7
- ---
- > .\" Source: Ansible 1.5
- 10,19c10
- < .TH "ANSIBLE" "1" "05/26/2014" "Ansible 1\&.7" "System administration commands"
- < .\" -----------------------------------------------------------------
- < .\" * Define some portability stuff
- < .\" -----------------------------------------------------------------
- < .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- < .\" http://bugs.debian.org/507673
- < .\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html
- < .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- < .ie \n(.g .ds Aq \(aq
- < .el .ds Aq '
- ---
- > .TH "ANSIBLE" "1" "01/03/2014" "Ansible 1\&.5" "System administration commands"
- 106c97
- < Extensive documentation is available in the documentation site: http://docs\&.ansible\&.com\&. IRC and mailing list info can be found in file CONTRIBUTING\&.md, available in: https://github\&.com/ansible/ansible
- ---
- > Extensive documentation as well as IRC and mailing list info is available on the ansible home page: https://ansible\&.github\&.com/
- diff -r ansible/ansible/docs/man/man1/ansible-vault.1 ansible-1.6.6/docs/man/man1/ansible-vault.1
- 4,5c4,5
- < .\" Generator: DocBook XSL Stylesheets v1.76.1 <http://docbook.sf.net/>
- < .\" Date: 05/26/2014
- ---
- > .\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
- > .\" Date: 03/17/2014
- 7c7
- < .\" Source: Ansible 1.7
- ---
- > .\" Source: Ansible 1.6
- 10c10
- < .TH "ANSIBLE\-VAULT" "1" "05/26/2014" "Ansible 1\&.7" "System administration commands"
- ---
- > .TH "ANSIBLE\-VAULT" "1" "03/17/2014" "Ansible 1\&.6" "System administration commands"
- Only in ansible/ansible: docsite
- Only in ansible/ansible/examples: DOCUMENTATION.yml
- Only in ansible/ansible/examples: issues
- Only in ansible/ansible/examples: scripts
- Only in ansible/ansible: .git
- Only in ansible/ansible: .gitignore
- Only in ansible/ansible: hacking
- Only in ansible/ansible: legacy
- diff -r ansible/ansible/lib/ansible/callback_plugins/noop.py ansible-1.6.6/lib/ansible/callback_plugins/noop.py
- 43a44,46
- > def runner_on_error(self, host, msg):
- > pass
- >
- 89c92
- < def playbook_on_play_start(self, name):
- ---
- > def playbook_on_play_start(self, pattern):
- diff -r ansible/ansible/lib/ansible/callbacks.py ansible-1.6.6/lib/ansible/callbacks.py
- 345a346,348
- > def on_error(self, host, msg):
- > call_callback_module('runner_on_error', host, msg)
- >
- 403a407,410
- > def on_error(self, host, err):
- > display("err: [%s] => %s\n" % (host, err), stderr=True, runner=self.runner)
- > super(CliRunnerCallbacks, self).on_error(host, err)
- >
- 529a537,548
- > def on_error(self, host, err):
- >
- > item = err.get('item', None)
- > msg = ''
- > if item:
- > msg = "err: [%s] => (item=%s) => %s" % (host, item, err)
- > else:
- > msg = "err: [%s] => %s" % (host, err)
- >
- > display(msg, color='red', stderr=True, runner=self.runner)
- > super(PlaybookRunnerCallbacks, self).on_error(host, err)
- >
- 627c646
- < if prompt and default is not None:
- ---
- > if prompt and default:
- 635d653
- < msg = prompt.encode(sys.stdout.encoding)
- 637,638c655,656
- < return getpass.getpass(msg)
- < return raw_input(msg)
- ---
- > return getpass.getpass(prompt)
- > return raw_input(prompt)
- 679,681c697,699
- < def on_play_start(self, name):
- < display(banner("PLAY [%s]" % name))
- < call_callback_module('playbook_on_play_start', name)
- ---
- > def on_play_start(self, pattern):
- > display(banner("PLAY [%s]" % pattern))
- > call_callback_module('playbook_on_play_start', pattern)
- diff -r ansible/ansible/lib/ansible/constants.py ansible-1.6.6/lib/ansible/constants.py
- 107c107
- < DEFAULT_ROLES_PATH = shell_expand_path(get_config(p, DEFAULTS, 'roles_path', 'ANSIBLE_ROLES_PATH', '/etc/ansible/roles'))
- ---
- > DEFAULT_ROLES_PATH = get_config(p, DEFAULTS, 'roles_path', 'ANSIBLE_ROLES_PATH', '/etc/ansible/roles')
- Only in ansible-1.6.6/lib/ansible: constants.pyc
- diff -r ansible/ansible/lib/ansible/__init__.py ansible-1.6.6/lib/ansible/__init__.py
- 17c17
- < __version__ = '1.7'
- ---
- > __version__ = '1.6.6'
- Only in ansible-1.6.6/lib/ansible: __init__.pyc
- diff -r ansible/ansible/lib/ansible/inventory/dir.py ansible-1.6.6/lib/ansible/inventory/dir.py
- 2d1
- < # (c) 2014, Serge van Ginderachter <[email protected]>
- 60,221c59,81
- <
- < # retrieve all groups and hosts form the parser and add them to
- < # self, don't look at group lists yet, to avoid
- < # recursion trouble, but just make sure all objects exist in self
- < newgroups = parser.groups.values()
- < for group in newgroups:
- < for host in group.hosts:
- < self._add_host(host)
- < for group in newgroups:
- < self._add_group(group)
- <
- < # now check the objects lists so they contain only objects from
- < # self; membership data in groups is already fine (except all &
- < # ungrouped, see later), but might still reference objects not in self
- < for group in self.groups.values():
- < # iterate on a copy of the lists, as those lists get changed in
- < # the loop
- < # list with group's child group objects:
- < for child in group.child_groups[:]:
- < if child != self.groups[child.name]:
- < group.child_groups.remove(child)
- < group.child_groups.append(self.groups[child.name])
- < # list with group's parent group objects:
- < for parent in group.parent_groups[:]:
- < if parent != self.groups[parent.name]:
- < group.parent_groups.remove(parent)
- < group.parent_groups.append(self.groups[parent.name])
- < # list with group's host objects:
- < for host in group.hosts[:]:
- < if host != self.hosts[host.name]:
- < group.hosts.remove(host)
- < group.hosts.append(self.hosts[host.name])
- < # also check here that the group that contains host, is
- < # also contained in the host's group list
- < if group not in self.hosts[host.name].groups:
- < self.hosts[host.name].groups.append(group)
- <
- < # extra checks on special groups all and ungrouped
- < # remove hosts from 'ungrouped' if they became member of other groups
- < if 'ungrouped' in self.groups:
- < ungrouped = self.groups['ungrouped']
- < # loop on a copy of ungrouped hosts, as we want to change that list
- < for host in ungrouped.hosts[:]:
- < if len(host.groups) > 1:
- < host.groups.remove(ungrouped)
- < ungrouped.hosts.remove(host)
- <
- < # remove hosts from 'all' if they became member of other groups
- < # all should only contain direct children, not grandchildren
- < # direct children should have dept == 1
- < if 'all' in self.groups:
- < allgroup = self.groups['all' ]
- < # loop on a copy of all's child groups, as we want to change that list
- < for group in allgroup.child_groups[:]:
- < # groups might once have beeen added to all, and later be added
- < # to another group: we need to remove the link wit all then
- < if len(group.parent_groups) > 1:
- < # real children of all have just 1 parent, all
- < # this one has more, so not a direct child of all anymore
- < group.parent_groups.remove(allgroup)
- < allgroup.child_groups.remove(group)
- < elif allgroup not in group.parent_groups:
- < # this group was once added to all, but doesn't list it as
- < # a parent any more; the info in the group is the correct
- < # info
- < allgroup.child_groups.remove(group)
- <
- <
- < def _add_group(self, group):
- < """ Merge an existing group or add a new one;
- < Track parent and child groups, and hosts of the new one """
- <
- < if group.name not in self.groups:
- < # it's brand new, add him!
- < self.groups[group.name] = group
- < if self.groups[group.name] != group:
- < # different object, merge
- < self._merge_groups(self.groups[group.name], group)
- <
- < def _add_host(self, host):
- < if host.name not in self.hosts:
- < # Papa's got a brand new host
- < self.hosts[host.name] = host
- < if self.hosts[host.name] != host:
- < # different object, merge
- < self._merge_hosts(self.hosts[host.name], host)
- <
- < def _merge_groups(self, group, newgroup):
- < """ Merge all of instance newgroup into group,
- < update parent/child relationships
- < group lists may still contain group objects that exist in self with
- < same name, but was instanciated as a different object in some other
- < inventory parser; these are handled later """
- <
- < # name
- < if group.name != newgroup.name:
- < raise errors.AnsibleError("Cannot merge group %s with %s" % (group.name, newgroup.name))
- <
- < # depth
- < group.depth = max([group.depth, newgroup.depth])
- <
- < # hosts list (host objects are by now already added to self.hosts)
- < for host in newgroup.hosts:
- < grouphosts = dict([(h.name, h) for h in group.hosts])
- < if host.name in grouphosts:
- < # same host name but different object, merge
- < self._merge_hosts(grouphosts[host.name], host)
- < else:
- < # new membership, add host to group from self
- < # group from self will also be added again to host.groups, but
- < # as different object
- < group.add_host(self.hosts[host.name])
- < # now remove this the old object for group in host.groups
- < for hostgroup in [g for g in host.groups]:
- < if hostgroup.name == group.name and hostgroup != self.groups[group.name]:
- < self.hosts[host.name].groups.remove(hostgroup)
- <
- <
- < # group child membership relation
- < for newchild in newgroup.child_groups:
- < # dict with existing child groups:
- < childgroups = dict([(g.name, g) for g in group.child_groups])
- < # check if child of new group is already known as a child
- < if newchild.name not in childgroups:
- < self.groups[group.name].add_child_group(newchild)
- <
- < # group parent membership relation
- < for newparent in newgroup.parent_groups:
- < # dict with existing parent groups:
- < parentgroups = dict([(g.name, g) for g in group.parent_groups])
- < # check if parent of new group is already known as a parent
- < if newparent.name not in parentgroups:
- < if newparent.name not in self.groups:
- < # group does not exist yet in self, import him
- < self.groups[newparent.name] = newparent
- < # group now exists but not yet as a parent here
- < self.groups[newparent.name].add_child_group(group)
- <
- < # variables
- < group.vars = utils.combine_vars(group.vars, newgroup.vars)
- <
- < def _merge_hosts(self,host, newhost):
- < """ Merge all of instance newhost into host """
- <
- < # name
- < if host.name != newhost.name:
- < raise errors.AnsibleError("Cannot merge host %s with %s" % (host.name, newhost.name))
- <
- < # group membership relation
- < for newgroup in newhost.groups:
- < # dict with existing groups:
- < hostgroups = dict([(g.name, g) for g in host.groups])
- < # check if new group is already known as a group
- < if newgroup.name not in hostgroups:
- < if newgroup.name not in self.groups:
- < # group does not exist yet in self, import him
- < self.groups[newgroup.name] = newgroup
- < # group now exists but doesn't have host yet
- < self.groups[newgroup.name].add_host(host)
- <
- < # variables
- < host.vars = utils.combine_vars(host.vars, newhost.vars)
- ---
- > # This takes a lot of code because we can't directly use any of the objects, as they have to blend
- > for name, group in parser.groups.iteritems():
- > if name not in self.groups:
- > self.groups[name] = group
- > else:
- > # group is already there, copy variables
- > # note: depth numbers on duplicates may be bogus
- > for k, v in group.get_variables().iteritems():
- > self.groups[name].set_variable(k, v)
- > for host in group.get_hosts():
- > if host.name not in self.hosts:
- > self.hosts[host.name] = host
- > else:
- > # host is already there, copy variables
- > # note: depth numbers on duplicates may be bogus
- > for k, v in host.vars.iteritems():
- > self.hosts[host.name].set_variable(k, v)
- > self.groups[name].add_host(self.hosts[host.name])
- >
- > # This needs to be a second loop to ensure all the parent groups exist
- > for name, group in parser.groups.iteritems():
- > for ancestor in group.get_ancestors():
- > self.groups[ancestor.name].add_child_group(self.groups[name])
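The 1.6.6 branch of the inventory/dir.py hunk merges each parser's results by copying variables onto already-known group and host objects rather than merging the object graphs (the 1.7 approach removed above). A minimal sketch of that copy-variables strategy, using hypothetical `Group`/`Host` stand-ins rather than the real ansible classes:

```python
class Host:
    def __init__(self, name, vars=None):
        self.name = name
        self.vars = dict(vars or {})


class Group:
    def __init__(self, name):
        self.name = name
        self.vars = {}
        self.hosts = []

    def add_host(self, host):
        if host not in self.hosts:
            self.hosts.append(host)

    def set_variable(self, k, v):
        self.vars[k] = v


def merge_parsed_groups(inventory_groups, inventory_hosts, parsed_groups):
    """Fold one parser's groups into an existing inventory.

    As in the '>' branch: first-seen objects are adopted as-is; duplicates
    only contribute their variables, so depth numbers on duplicates may be
    bogus (the comment the diff itself makes).
    """
    for name, group in parsed_groups.items():
        if name not in inventory_groups:
            inventory_groups[name] = group
        else:
            for k, v in group.vars.items():
                inventory_groups[name].set_variable(k, v)
        for host in group.hosts:
            if host.name not in inventory_hosts:
                inventory_hosts[host.name] = host
            else:
                for k, v in host.vars.items():
                    inventory_hosts[host.name].vars[k] = v
            inventory_groups[name].add_host(inventory_hosts[host.name])
```

The 1.7 code deleted above goes further: it also rewrites each group's `child_groups`/`parent_groups` lists so every reference points at the canonical object in `self`, which is what the variable-copy approach sidesteps.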
- diff -r ansible/ansible/lib/ansible/inventory/group.py ansible-1.6.6/lib/ansible/inventory/group.py
- 31,32c31
- < self._hosts_cache = None
- < #self.clear_hosts_cache()
- ---
- > self.clear_hosts_cache()
- 44,45d42
- <
- < # update the depth of the child
- 47,55c44
- <
- < # update the depth of the grandchildren
- < group._check_children_depth()
- <
- < # now add self to child's parent_groups list, but only if there
- < # isn't already a group with the same name
- < if not self.name in [g.name for g in group.parent_groups]:
- < group.parent_groups.append(self)
- <
- ---
- > group.parent_groups.append(self)
- 57,62d45
- <
- < def _check_children_depth(self):
- <
- < for group in self.child_groups:
- < group.depth = max([self.depth+1, group.depth])
- < group._check_children_depth()
- diff -r ansible/ansible/lib/ansible/inventory/ini.py ansible-1.6.6/lib/ansible/inventory/ini.py
- 48d47
- < self._add_allgroup_children()
- 73,79d71
- < def _add_allgroup_children(self):
- <
- < for group in self.groups.values():
- < if group.depth == 0 and group.name != 'all':
- < self.groups['all'].add_child_group(group)
- <
- <
- 97a90
- > all.add_child_group(new_group)
- 100a94
- > all.add_child_group(new_group)
- diff -r ansible/ansible/lib/ansible/inventory/__init__.py ansible-1.6.6/lib/ansible/inventory/__init__.py
- 18a19
- >
- 41c42
- < '_pattern_cache', '_vault_password', '_vars_plugins', '_playbook_basedir']
- ---
- > '_pattern_cache', '_vars_plugins', '_playbook_basedir']
- 43c44
- < def __init__(self, host_list=C.DEFAULT_HOST_LIST, vault_password=None):
- ---
- > def __init__(self, host_list=C.DEFAULT_HOST_LIST):
- 48d48
- < self._vault_password=vault_password
- 59c59
- < # to be set by calling set_playbook_basedir by playbook code
- ---
- > # to be set by calling set_playbook_basedir by ansible-playbook
- 143,150d142
- < # get group vars from group_vars/ files and vars plugins
- < for group in self.groups:
- < group.vars = utils.combine_vars(group.vars, self.get_group_variables(group.name, self._vault_password))
- <
- < # get host vars from host_vars/ files and vars plugins
- < for host in self.get_hosts():
- < host.vars = utils.combine_vars(host.vars, self.get_variables(host.name, self._vault_password))
- <
- 158,168d149
- < def _match_list(self, items, item_attr, pattern_str):
- < results = []
- < if not pattern_str.startswith('~'):
- < pattern = re.compile(fnmatch.translate(pattern_str))
- < else:
- < pattern = re.compile(pattern_str[1:])
- < for item in items:
- < if pattern.search(getattr(item, item_attr)):
- < results.append(item)
- < return results
- <
- 209c190
- < elif p:
- ---
- > else:
- 224,226c205,209
- < # avoid resolving a pattern that is a plain host
- < if p in self._hosts_cache:
- < hosts.append(self.get_host(p))
- ---
- > that = self.__get_hosts(p)
- > if p.startswith("!"):
- > hosts = [ h for h in hosts if h not in that ]
- > elif p.startswith("&"):
- > hosts = [ h for h in hosts if h in that ]
- 228,235c211,213
- < that = self.__get_hosts(p)
- < if p.startswith("!"):
- < hosts = [ h for h in hosts if h not in that ]
- < elif p.startswith("&"):
- < hosts = [ h for h in hosts if h in that ]
- < else:
- < to_append = [ h for h in that if h.name not in [ y.name for y in hosts ] ]
- < hosts.extend(to_append)
- ---
- > to_append = [ h for h in that if h.name not in [ y.name for y in hosts ] ]
- > hosts.extend(to_append)
- >
- 260,263d237
- < # Do not parse regexes for enumeration info
- < if pattern.startswith('~'):
- < return (pattern, None)
- <
- 326d299
- < results = []
- 333,337c306
- < def __append_host_to_results(host):
- < if host not in results and host.name not in hostnames:
- < hostnames.add(host.name)
- < results.append(host)
- <
- ---
- > results = []
- 340,350c309,313
- < if pattern == 'all':
- < for host in group.get_hosts():
- < __append_host_to_results(host)
- < else:
- < if self._match(group.name, pattern):
- < for host in group.get_hosts():
- < __append_host_to_results(host)
- < else:
- < matching_hosts = self._match_list(group.get_hosts(), 'name', pattern)
- < for host in matching_hosts:
- < __append_host_to_results(host)
- ---
- > for host in group.get_hosts():
- > if pattern == 'all' or self._match(group.name, pattern) or self._match(host.name, pattern):
- > if host not in results and host.name not in hostnames:
- > results.append(host)
- > hostnames.add(host.name)
- 362,365c325,332
- < if host in self._hosts_cache:
- < return self._hosts_cache[host].get_groups()
- < else:
- < return []
- ---
- > results = []
- > groups = self.get_groups()
- > for group in groups:
- > for hostn in group.get_hosts():
- > if host == hostn.name:
- > results.append(group)
- > continue
- > return results
- 406,408c373,375
- < def get_group_variables(self, groupname, update_cached=False, vault_password=None):
- < if groupname not in self._vars_per_group or update_cached:
- < self._vars_per_group[groupname] = self._get_group_variables(groupname, vault_password=vault_password)
- ---
- > def get_group_variables(self, groupname):
- > if groupname not in self._vars_per_group:
- > self._vars_per_group[groupname] = self._get_group_variables(groupname)
- 411,412c378
- < def _get_group_variables(self, groupname, vault_password=None):
- <
- ---
- > def _get_group_variables(self, groupname):
- 415a382
- > return group.get_variables()
- 417,434c384,385
- < vars = {}
- <
- < # plugin.get_group_vars retrieves just vars for specific group
- < vars_results = [ plugin.get_group_vars(group, vault_password=vault_password) for plugin in self._vars_plugins if hasattr(plugin, 'get_group_vars')]
- < for updated in vars_results:
- < if updated is not None:
- < vars = utils.combine_vars(vars, updated)
- <
- < # get group variables set by Inventory Parsers
- < vars = utils.combine_vars(vars, group.get_variables())
- <
- < # Read group_vars/ files
- < vars = utils.combine_vars(vars, self.get_group_vars(group))
- <
- < return vars
- <
- < def get_variables(self, hostname, update_cached=False, vault_password=None):
- < if hostname not in self._vars_per_host or update_cached:
- ---
- > def get_variables(self, hostname, vault_password=None):
- > if hostname not in self._vars_per_host:
- 445,447c396
- <
- < # plugin.run retrieves all vars (also from groups) for host
- < vars_results = [ plugin.run(host, vault_password=vault_password) for plugin in self._vars_plugins if hasattr(plugin, 'run')]
- ---
- > vars_results = [ plugin.run(host, vault_password=vault_password) for plugin in self._vars_plugins ]
- 452,458d400
- < # plugin.get_host_vars retrieves just vars for specific host
- < vars_results = [ plugin.get_host_vars(host, vault_password=vault_password) for plugin in self._vars_plugins if hasattr(plugin, 'get_host_vars')]
- < for updated in vars_results:
- < if updated is not None:
- < vars = utils.combine_vars(vars, updated)
- <
- < # get host variables set by Inventory Parsers
- 460,463d401
- <
- < # still need to check InventoryParser per host vars
- < # which actually means InventoryScript per host,
- < # which is not performant
- 466,469d403
- <
- < # Read host_vars/ files
- < vars = utils.combine_vars(vars, self.get_host_vars(host))
- <
- 473,477c407,408
- < if group.name not in self.groups_list():
- < self.groups.append(group)
- < self._groups_list = None # invalidate internal cache
- < else:
- < raise errors.AnsibleError("group already in inventory: %s" % group.name)
- ---
- > self.groups.append(group)
- > self._groups_list = None # invalidate internal cache
- 571a503,505
- > """
- > sets the base directory of the playbook so inventory plugins can use it to find
- > variable files and other things.
- 573,612c507
- < sets the base directory of the playbook so inventory can use it as a
- < basedir for host_ and group_vars, and other things.
- < """
- < # Only update things if dir is a different playbook basedir
- < if dir != self._playbook_basedir:
- < self._playbook_basedir = dir
- < # get group vars from group_vars/ files
- < for group in self.groups:
- < group.vars = utils.combine_vars(group.vars, self.get_group_vars(group, new_pb_basedir=True))
- < # get host vars from host_vars/ files
- < for host in self.get_hosts():
- < host.vars = utils.combine_vars(host.vars, self.get_host_vars(host, new_pb_basedir=True))
- <
- < def get_host_vars(self, host, new_pb_basedir=False):
- < """ Read host_vars/ files """
- < return self._get_hostgroup_vars(host=host, group=None, new_pb_basedir=False)
- <
- < def get_group_vars(self, group, new_pb_basedir=False):
- < """ Read group_vars/ files """
- < return self._get_hostgroup_vars(host=None, group=group, new_pb_basedir=False)
- <
- < def _get_hostgroup_vars(self, host=None, group=None, new_pb_basedir=False):
- < """
- < Loads variables from group_vars/<groupname> and host_vars/<hostname> in directories parallel
- < to the inventory base directory or in the same directory as the playbook. Variables in the playbook
- < dir will win over the inventory dir if files are in both.
- < """
- <
- < results = {}
- < scan_pass = 0
- < _basedir = self.basedir()
- <
- < # look in both the inventory base directory and the playbook base directory
- < # unless we do an update for a new playbook base dir
- < if not new_pb_basedir:
- < basedirs = [_basedir, self._playbook_basedir]
- < else:
- < basedirs = [self._playbook_basedir]
- <
- < for basedir in basedirs:
- ---
- > self._playbook_basedir = dir
- 614,640d508
- < # this can happen from particular API usages, particularly if not run
- < # from /usr/bin/ansible-playbook
- < if basedir is None:
- < continue
- <
- < scan_pass = scan_pass + 1
- <
- < # it's not an eror if the directory does not exist, keep moving
- < if not os.path.exists(basedir):
- < continue
- <
- < # save work of second scan if the directories are the same
- < if _basedir == self._playbook_basedir and scan_pass != 1:
- < continue
- <
- < if group and host is None:
- < # load vars in dir/group_vars/name_of_group
- < base_path = os.path.join(basedir, "group_vars/%s" % group.name)
- < results = utils.load_vars(base_path, results, vault_password=self._vault_password)
- <
- < elif host and group is None:
- < # same for hostvars in dir/host_vars/name_of_host
- < base_path = os.path.join(basedir, "host_vars/%s" % host.name)
- < results = utils.load_vars(base_path, results, vault_password=self._vault_password)
- <
- < # all done, results is a dictionary of variables for this particular host.
- < return results
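- The hunk above shows the devel-side `_get_hostgroup_vars` scanning both the inventory base directory and the playbook base directory, with the playbook directory winning. A condensed, runnable sketch of that scan order, using a hypothetical `load_vars` stand-in (the real helper parses YAML files; here it only records the paths it would check):

```python
import os

def load_vars(base_path, results):
    # hypothetical stand-in: the real code parses base_path(.yml/.yaml)
    # and merges its variables into results; here we just record the path
    results.setdefault("_scanned", []).append(base_path)
    return results

def get_group_vars(group_name, inventory_basedir, playbook_basedir):
    """Scan group_vars/<name> under both base directories; later
    directories (the playbook dir) override earlier ones, as in the
    hunk above. Unset or already-scanned directories are skipped."""
    results = {}
    seen = set()
    for basedir in (inventory_basedir, playbook_basedir):
        if basedir is None or basedir in seen:
            # save the work of a second scan when the dirs are the same
            continue
        seen.add(basedir)
        base_path = os.path.join(basedir, "group_vars/%s" % group_name)
        results = load_vars(base_path, results)
    return results
```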
- diff -r ansible/ansible/lib/ansible/inventory/script.py ansible-1.6.6/lib/ansible/inventory/script.py
- 49d48
- <
- 53,56c52
- <
- < # not passing from_remote because data from CMDB is trusted
- < self.raw = utils.parse_json(self.data)
- <
- ---
- > self.raw = utils.parse_json(self.data, from_remote=True)
- 67c63
- <
- ---
- >
- 103a100,101
- > if group.name != all.name:
- > all.add_child_group(group)
- 113,117d110
- <
- < for group in groups.values():
- < if group.depth == 0 and group.name != 'all':
- < all.add_child_group(group)
- <
- Only in ansible-1.6.6/lib/ansible/inventory/vars_plugins: group_vars.py
- Only in ansible/ansible/lib/ansible/inventory/vars_plugins: noop.py
- diff -r ansible/ansible/lib/ansible/module_common.py ansible-1.6.6/lib/ansible/module_common.py
- 32d31
- < REPLACER_WINDOWS = "# POWERSHELL_COMMON"
- 50c49,51
- < ... will result in the insertion basic.py into the module
- ---
- > will result in a template evaluation of
- >
- > {{ include 'basic.py' }}
- 56,60d56
- <
- < # POWERSHELL_COMMON
- <
- < Also results in the inclusion of the common code in powershell.ps1
- <
- 104,107d99
- < if REPLACER_WINDOWS in line:
- < ps_data = self.slurp(os.path.join(self.snippet_path, "powershell.ps1"))
- < output.write(ps_data)
- < snippet_names.append('powershell')
- 127,134c119,120
- < if not module_path.endswith(".ps1"):
- < # Unixy modules
- < if len(snippet_names) > 0 and not 'basic' in snippet_names:
- < raise errors.AnsibleError("missing required import in %s: from ansible.module_utils.basic import *" % module_path)
- < else:
- < # Windows modules
- < if len(snippet_names) > 0 and not 'powershell' in snippet_names:
- < raise errors.AnsibleError("missing required import in %s: # POWERSHELL_COMMON" % module_path)
- ---
- > if len(snippet_names) > 0 and not 'basic' in snippet_names:
- > raise errors.AnsibleError("missing required import in %s: from ansible.module_utils.basic import *" % module_path)
- diff -r ansible/ansible/lib/ansible/module_utils/basic.py ansible-1.6.6/lib/ansible/module_utils/basic.py
- 104,141d103
- < try:
- < from ast import literal_eval as _literal_eval
- < except ImportError:
- < # a replacement for literal_eval that works with python 2.4. from:
- < # https://mail.python.org/pipermail/python-list/2009-September/551880.html
- < # which is essentially a cut/past from an earlier (2.6) version of python's
- < # ast.py
- < from compiler import parse
- < from compiler.ast import *
- < def _literal_eval(node_or_string):
- < """
- < Safely evaluate an expression node or a string containing a Python
- < expression. The string or node provided may only consist of the following
- < Python literal structures: strings, numbers, tuples, lists, dicts, booleans,
- < and None.
- < """
- < _safe_names = {'None': None, 'True': True, 'False': False}
- < if isinstance(node_or_string, basestring):
- < node_or_string = parse(node_or_string, mode='eval')
- < if isinstance(node_or_string, Expression):
- < node_or_string = node_or_string.node
- < def _convert(node):
- < if isinstance(node, Const) and isinstance(node.value, (basestring, int, float, long, complex)):
- < return node.value
- < elif isinstance(node, Tuple):
- < return tuple(map(_convert, node.nodes))
- < elif isinstance(node, List):
- < return list(map(_convert, node.nodes))
- < elif isinstance(node, Dict):
- < return dict((_convert(k), _convert(v)) for k, v in node.items)
- < elif isinstance(node, Name):
- < if node.name in _safe_names:
- < return _safe_names[node.name]
- < elif isinstance(node, UnarySub):
- < return -_convert(node.expr)
- < raise ValueError('malformed string')
- < return _convert(node_or_string)
- <
- 183,194d144
- < def get_distribution_version():
- < ''' return the distribution version '''
- < if platform.system() == 'Linux':
- < try:
- < distribution_version = platform.linux_distribution()[1]
- < except:
- < # FIXME: MethodMissing, I assume?
- < distribution_version = platform.dist()[1]
- < else:
- < distribution_version = None
- < return distribution_version
- <
- 739,770d688
- < def safe_eval(self, str, locals=None, include_exceptions=False):
- <
- < # do not allow method calls to modules
- < if not isinstance(str, basestring):
- < # already templated to a datastructure, perhaps?
- < if include_exceptions:
- < return (str, None)
- < return str
- < if re.search(r'\w\.\w+\(', str):
- < if include_exceptions:
- < return (str, None)
- < return str
- < # do not allow imports
- < if re.search(r'import \w+', str):
- < if include_exceptions:
- < return (str, None)
- < return str
- < try:
- < result = None
- < if not locals:
- < result = _literal_eval(str)
- < else:
- < result = _literal_eval(str, None, locals)
- < if include_exceptions:
- < return (result, None)
- < else:
- < return result
- < except Exception, e:
- < if include_exceptions:
- < return (str, e)
- < return str
- <
- 1222d1139
- < r'^(?P<before>.*:)(?P<password>.*)(?P<after>\@.*)$',
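- The devel-side hunks above add a `safe_eval`/`_literal_eval` pair that rejects anything resembling a method call or an import before attempting a literal parse. A minimal sketch of that guard pattern on modern Python, using the stdlib `ast.literal_eval` in place of the 2.4 compatibility shim (function name and behavior here are illustrative, not the exact module API):

```python
import ast
import re

def safe_eval(expr, include_exceptions=False):
    """Evaluate a string as a Python literal, refusing method calls
    and imports (mirrors the guard pattern in the hunk above)."""
    if not isinstance(expr, str):
        # already templated to a datastructure, perhaps? pass it through
        return (expr, None) if include_exceptions else expr
    # do not allow method calls on modules, or imports
    if re.search(r'\w\.\w+\(', expr) or re.search(r'import \w+', expr):
        return (expr, None) if include_exceptions else expr
    try:
        result = ast.literal_eval(expr)
        return (result, None) if include_exceptions else result
    except Exception as e:
        # not a literal: hand the original string back unevaluated
        return (expr, e) if include_exceptions else expr
```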
- diff -r ansible/ansible/lib/ansible/module_utils/facts.py ansible-1.6.6/lib/ansible/module_utils/facts.py
- 112,113d111
- < { 'path' : '/usr/sbin/pkgadd', 'name' : 'svr4pkg' },
- < { 'path' : '/usr/bin/pkg', 'name' : 'pkg' },
- 269c267
- < if os.path.exists(path) and os.path.getsize(path) > 0:
- ---
- > if os.path.exists(path):
- 742,744c740
- < part['sectorsize'] = get_file_content(part_sysdir + "/queue/physical_block_size")
- < if not part['sectorsize']:
- < part['sectorsize'] = get_file_content(part_sysdir + "/queue/hw_sector_size",512)
- ---
- > part['sectorsize'] = get_file_content(part_sysdir + "/queue/hw_sector_size",512)
- 759c755
- < d['sectorsize'] = get_file_content(sysdir + "/queue/physical_block_size")
- ---
- > d['sectorsize'] = get_file_content(sysdir + "/queue/hw_sector_size")
- 761c757
- < d['sectorsize'] = get_file_content(sysdir + "/queue/hw_sector_size",512)
- ---
- > d['sectorsize'] = 512
- 1299,1310c1295,1297
- < try:
- < rc, out, err = module.run_command("grep Physical /var/adm/syslog/syslog.log")
- < data = re.search('.*Physical: ([0-9]*) Kbytes.*',out).groups()[0].strip()
- < self.facts['memtotal_mb'] = int(data) / 1024
- < except AttributeError:
- < #For systems where memory details aren't sent to syslog or the log has rotated, use parsed
- < #adb output. Unfortunatley /dev/kmem doesn't have world-read, so this only works as root.
- < if os.access("/dev/kmem", os.R_OK):
- < rc, out, err = module.run_command("echo 'phys_mem_pages/D' | adb -k /stand/vmunix /dev/kmem | tail -1 | awk '{print $2}'", use_unsafe_shell=True)
- < if not err:
- < data = out
- < self.facts['memtotal_mb'] = int(data) / 256
- ---
- > rc, out, err = module.run_command("grep Physical /var/adm/syslog/syslog.log")
- > data = re.search('.*Physical: ([0-9]*) Kbytes.*',out).groups()[0].strip()
- > self.facts['memtotal_mb'] = int(data) / 1024
- 1327,1329d1313
- < separator = ':'
- < if self.facts['distribution_version'] == "B.11.23":
- < separator = '='
- 1331c1315
- < self.facts['firmware_version'] = out.split(separator)[1].strip()
- ---
- > self.facts['firmware_version'] = out.split(':')[1].strip()
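- Several of the facts.py hunks above change sector-size detection to prefer `queue/physical_block_size` and fall back to `queue/hw_sector_size` with a default of 512. A small runnable sketch of that read-with-fallback pattern, with a simplified stand-in for the `get_file_content` helper (the real helper takes a path and a default, roughly as shown):

```python
import os

def get_file_content(path, default=None):
    """Return the stripped contents of path, or default if the file
    is missing or empty (simplified stand-in for the module_utils helper)."""
    try:
        with open(path) as f:
            data = f.read().strip()
        return data if data else default
    except (IOError, OSError):
        return default

def sector_size(part_sysdir):
    # prefer the physical block size; fall back to the legacy
    # hw_sector_size, defaulting to 512 as the hunk above does
    size = get_file_content(part_sysdir + "/queue/physical_block_size")
    if not size:
        size = get_file_content(part_sysdir + "/queue/hw_sector_size", 512)
    return size
```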
- diff -r ansible/ansible/lib/ansible/module_utils/known_hosts.py ansible-1.6.6/lib/ansible/module_utils/known_hosts.py
- 30,36c30
- < import urlparse
- <
- < try:
- < from hashlib import sha1
- < except ImportError:
- < import sha as sha1
- <
- ---
- > from hashlib import sha1
- 53c47
- < module.fail_json(msg="%s has an unknown hostkey. Set accept_hostkey to True or manually add the hostkey prior to running the git module" % fqdn)
- ---
- > module.fail_json(msg="%s has an unknown hostkey. Set accept_hostkey to True or manually add the hostkey prior to running the git module" % fqdn)
- 60,61c54
- < if "@" in repo_url and "://" not in repo_url:
- < # most likely a git@ or ssh+git@ type URL
- ---
- > if "@" in repo_url and not repo_url.startswith("http"):
- 69,80d61
- < elif "://" in repo_url:
- < # this should be something we can parse with urlparse
- < parts = urlparse.urlparse(repo_url)
- < if 'ssh' not in parts[0] and 'git' not in parts[0]:
- < # don't try and scan a hostname that's not ssh
- < return None
- < if parts[1] != '':
- < result = parts[1]
- < if ":" in result:
- < result = result.split(":")[0]
- < if "@" in result:
- < result = result.split("@", 1)[1]
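- The known_hosts.py hunk above (devel side) extends hostname extraction beyond `git@host:path` forms to URL-style repo addresses via `urlparse`, scanning only schemes that can be reached over ssh. A rough equivalent using Python 3's `urllib.parse`, assuming the same ssh/git-only policy:

```python
from urllib.parse import urlparse

def get_fqdn(repo_url):
    """Extract the hostname to key-scan from a repo URL, following the
    devel-side logic above: scp-like git@host:path addresses, plus
    ssh://-style URLs; anything else returns None."""
    if "@" in repo_url and "://" not in repo_url:
        # most likely a git@ or ssh+git@ type scp-style address
        rest = repo_url.split("@", 1)[1]
        return rest.split(":")[0]
    if "://" in repo_url:
        parts = urlparse(repo_url)
        if "ssh" not in parts.scheme and "git" not in parts.scheme:
            # don't try to scan a hostname that's not reachable over ssh
            return None
        if parts.netloc:
            host = parts.netloc
            if "@" in host:
                host = host.split("@", 1)[1]
            return host.split(":")[0]
    return None
```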
- Only in ansible/ansible/lib/ansible/module_utils: powershell.ps1
- diff -r ansible/ansible/lib/ansible/playbook/__init__.py ansible-1.6.6/lib/ansible/playbook/__init__.py
- 167,170d166
- <
- < # let inventory know the playbook basedir so it can load more vars
- < self.inventory.set_playbook_basedir(self.basedir)
- <
- 329,331c325,326
- <
- < ansible.callbacks.set_play(self.callbacks, None)
- < ansible.callbacks.set_play(self.runner_callbacks, None)
- ---
- > ansible.callbacks.set_play(self.callbacks, None)
- > ansible.callbacks.set_play(self.runner_callbacks, None)
- diff -r ansible/ansible/lib/ansible/playbook/play.py ansible-1.6.6/lib/ansible/playbook/play.py
- 47c47
- < 'any_errors_fatal', 'roles', 'role_names', 'pre_tasks', 'post_tasks', 'max_fail_percentage',
- ---
- > 'any_errors_fatal', 'roles', 'pre_tasks', 'post_tasks', 'max_fail_percentage',
- 334a335
- >
- diff -r ansible/ansible/lib/ansible/runner/action_plugins/assemble.py ansible-1.6.6/lib/ansible/runner/action_plugins/assemble.py
- 122c122
- < self.runner._remote_chmod(conn, 'a+r', xfered, tmp)
- ---
- > self.runner._low_level_exec_command(conn, "chmod a+r %s" % xfered, tmp)
- diff -r ansible/ansible/lib/ansible/runner/action_plugins/assert.py ansible-1.6.6/lib/ansible/runner/action_plugins/assert.py
- 41c41,42
- < msg = None
- ---
- > msg = ''
- >
- 52,61c53,55
- < test_result = utils.check_conditional(that, self.runner.basedir, inject, fail_on_undefined=True)
- < if not test_result:
- < result = dict(
- < failed = True,
- < evaluated_to = test_result,
- < assertion = that,
- < )
- < if msg:
- < result['msg'] = msg
- < return ReturnData(conn=conn, result=result)
- ---
- > result = utils.check_conditional(that, self.runner.basedir, inject, fail_on_undefined=True)
- > if not result:
- > return ReturnData(conn=conn, result=dict(failed=True, assertion=that, evaluated_to=result))
- 64d57
- <
- diff -r ansible/ansible/lib/ansible/runner/action_plugins/async.py ansible-1.6.6/lib/ansible/runner/action_plugins/async.py
- 40c40
- < self.runner._remote_chmod(conn, 'a+rx', module_path, tmp)
- ---
- > self.runner._low_level_exec_command(conn, "chmod a+rx %s" % module_path, tmp)
- diff -r ansible/ansible/lib/ansible/runner/action_plugins/copy.py ansible-1.6.6/lib/ansible/runner/action_plugins/copy.py
- 139,140c139,140
- < if not conn.shell.path_has_trailing_slash(dest):
- < dest = conn.shell.join_path(dest, '')
- ---
- > if not dest.endswith("/"):
- > dest += "/"
- 172,173c172,173
- < if conn.shell.path_has_trailing_slash(dest):
- < dest_file = conn.shell.join_path(dest, source_rel)
- ---
- > if dest.endswith("/"):
- > dest_file = os.path.join(dest, source_rel)
- 175c175
- < dest_file = conn.shell.join_path(dest)
- ---
- > dest_file = dest
- 189c189
- < dest_file = conn.shell.join_path(dest, source_rel)
- ---
- > dest_file = os.path.join(dest, source_rel)
- 231c231
- < self.runner._remote_chmod(conn, 'a+r', tmp_src, tmp_path)
- ---
- > self.runner._low_level_exec_command(conn, "chmod a+r %s" % tmp_src, tmp_path)
- diff -r ansible/ansible/lib/ansible/runner/action_plugins/fetch.py ansible-1.6.6/lib/ansible/runner/action_plugins/fetch.py
- 60,64d59
- < source = conn.shell.join_path(source)
- < if os.path.sep not in conn.shell.join_path('a', ''):
- < source_local = source.replace('\\', '/')
- < else:
- < source_local = source
- 70c65
- < base = os.path.basename(source_local)
- ---
- > base = os.path.basename(source)
- 77c72
- < dest = "%s/%s/%s" % (utils.path_dwim(self.runner.basedir, dest), conn.host, source_local)
- ---
- > dest = "%s/%s/%s" % (utils.path_dwim(self.runner.basedir, dest), conn.host, source)
- diff -r ansible/ansible/lib/ansible/runner/action_plugins/include_vars.py ansible-1.6.6/lib/ansible/runner/action_plugins/include_vars.py
- 47c47
- < if data and type(data) != dict:
- ---
- > if type(data) != dict:
- 49,50d48
- < elif data is None:
- < data = {}
- diff -r ansible/ansible/lib/ansible/runner/action_plugins/pause.py ansible-1.6.6/lib/ansible/runner/action_plugins/pause.py
- 104c104
- < self.result['user_input'] = raw_input(self.prompt.encode(sys.stdout.encoding))
- ---
- > self.result['user_input'] = raw_input(self.prompt)
- diff -r ansible/ansible/lib/ansible/runner/action_plugins/script.py ansible-1.6.6/lib/ansible/runner/action_plugins/script.py
- 109c109
- < tmp_src = conn.shell.join_path(tmp, os.path.basename(source))
- ---
- > tmp_src = os.path.join(tmp, os.path.basename(source))
- 118c118
- < chmod_mode = 'a+rx'
- ---
- > cmd_args_chmod = "chmod a+rx %s" % tmp_src
- 121,122c121,122
- < chmod_mode = '+rx'
- < self.runner._remote_chmod(conn, chmod_mode, tmp_src, tmp, sudoable=sudoable, su=self.runner.su)
- ---
- > cmd_args_chmod = "chmod +rx %s" % tmp_src
- > self.runner._low_level_exec_command(conn, cmd_args_chmod, tmp, sudoable=sudoable, su=self.runner.su)
- 125,126c125,126
- < env_string = self.runner._compute_environment_string(conn, inject)
- < module_args = ' '.join([env_string, tmp_src, args])
- ---
- > env_string = self.runner._compute_environment_string(inject)
- > module_args = env_string + tmp_src + ' ' + args
- 133c133
- < self.runner._remove_tmp_path(conn, tmp)
- ---
- > self.runner._low_level_exec_command(conn, 'rm -rf %s >/dev/null 2>&1' % tmp, tmp)
- diff -r ansible/ansible/lib/ansible/runner/action_plugins/template.py ansible-1.6.6/lib/ansible/runner/action_plugins/template.py
- 82c82
- < if dest.endswith("/"): # CCTODO: Fix path for Windows hosts.
- ---
- > if dest.endswith("/"):
- 90c90
- < result = dict(failed=True, msg=type(e).__name__ + ": " + str(e))
- ---
- > result = dict(failed=True, msg=str(e))
- 117c117
- < self.runner._remote_chmod(conn, 'a+r', xfered, tmp)
- ---
- > self.runner._low_level_exec_command(conn, "chmod a+r %s" % xfered, tmp)
- diff -r ansible/ansible/lib/ansible/runner/action_plugins/unarchive.py ansible-1.6.6/lib/ansible/runner/action_plugins/unarchive.py
- 57c57
- < dest = os.path.expanduser(dest) # CCTODO: Fix path for Windows hosts.
- ---
- > dest = os.path.expanduser(dest)
- 80c80
- < self.runner._remote_chmod(conn, 'a+r', tmp_src, tmp)
- ---
- > self.runner._low_level_exec_command(conn, "chmod a+r %s" % tmp_src, tmp)
- diff -r ansible/ansible/lib/ansible/runner/connection_plugins/libvirt_lxc.py ansible-1.6.6/lib/ansible/runner/connection_plugins/libvirt_lxc.py
- 68c68
- < def exec_command(self, cmd, tmp_path, sudo_user, sudoable=False, executable='/bin/sh', in_data=None, su=None, su_user=None):
- ---
- > def exec_command(self, cmd, tmp_path, sudo_user, sudoable=False, executable='/bin/sh'):
- 70,75d69
- <
- < if su or su_user:
- < raise errors.AnsibleError("Internal Error: this module does not support running commands via su")
- <
- < if in_data:
- < raise errors.AnsibleError("Internal Error: this module does not support optimized module pipelining")
- diff -r ansible/ansible/lib/ansible/runner/connection_plugins/local.py ansible-1.6.6/lib/ansible/runner/connection_plugins/local.py
- 56c56
- < local_cmd = executable.split() + ['-c', cmd]
- ---
- > local_cmd = [executable, '-c', cmd]
- 61d60
- < executable = executable.split()[0] if executable else None
- 65c64
- < cwd=self.runner.basedir, executable=executable,
- ---
- > cwd=self.runner.basedir, executable=executable or None,
- diff -r ansible/ansible/lib/ansible/runner/connection_plugins/paramiko_ssh.py ansible-1.6.6/lib/ansible/runner/connection_plugins/paramiko_ssh.py
- 34d33
- < import re
- 189d187
- < self.ssh.get_transport().set_keepalive(5)
- 215d212
- < prompt_re = re.compile(prompt)
- 221,225c218
- < while True:
- < if success_key in sudo_output or \
- < (self.runner.sudo_pass and sudo_output.endswith(prompt)) or \
- < (self.runner.su_pass and prompt_re.match(sudo_output)):
- < break
- ---
- > while not sudo_output.endswith(prompt) and success_key not in sudo_output:
- diff -r ansible/ansible/lib/ansible/runner/connection_plugins/ssh.py ansible-1.6.6/lib/ansible/runner/connection_plugins/ssh.py
- 20d19
- < import re
- 45c44
- < self.user = str(user)
- ---
- > self.user = user
- 87c86
- < self.common_args += ["-o", "IdentityFile=\"%s\"" % os.path.expanduser(self.private_key_file)]
- ---
- > self.common_args += ["-o", "IdentityFile="+os.path.expanduser(self.private_key_file)]
- 89c88
- < self.common_args += ["-o", "IdentityFile=\"%s\"" % os.path.expanduser(self.runner.private_key_file)]
- ---
- > self.common_args += ["-o", "IdentityFile="+os.path.expanduser(self.runner.private_key_file)]
- 97c96,97
- < self.common_args += ["-o", "User=" + (self.user or pwd.getpwuid(os.geteuid())[0])]
- ---
- > if self.user != pwd.getpwuid(os.geteuid())[0]:
- > self.common_args += ["-o", "User="+self.user]
- 160,169c160,164
- < if self.runner.sudo and sudoable:
- < if self.runner.sudo_pass:
- < incorrect_password = gettext.dgettext(
- < "sudo", "Sorry, try again.")
- < if stdout.endswith("%s\r\n%s" % (incorrect_password,
- < prompt)):
- < raise errors.AnsibleError('Incorrect sudo password')
- <
- < if stdout.endswith(prompt):
- < raise errors.AnsibleError('Missing sudo password')
- ---
- > if self.runner.sudo and sudoable and self.runner.sudo_pass:
- > incorrect_password = gettext.dgettext(
- > "sudo", "Sorry, try again.")
- > if stdout.endswith("%s\r\n%s" % (incorrect_password, prompt)):
- > raise errors.AnsibleError('Incorrect sudo password')
- 222,230c217,219
- < try:
- < host_fh = open(hf)
- < except IOError, e:
- < hfiles_not_found += 1
- < continue
- < else:
- < data = host_fh.read()
- < host_fh.close()
- <
- ---
- > host_fh = open(hf)
- > data = host_fh.read()
- > host_fh.close()
- 277d265
- < prompt_re = re.compile(prompt)
- 318,323c306
- < while True:
- < if success_key in sudo_output or \
- < (self.runner.sudo_pass and sudo_output.endswith(prompt)) or \
- < (self.runner.su_pass and prompt_re.match(sudo_output)):
- < break
- <
- ---
- > while not sudo_output.endswith(prompt) and success_key not in sudo_output:
- Only in ansible/ansible/lib/ansible/runner/connection_plugins: winrm.py
- diff -r ansible/ansible/lib/ansible/runner/connection.py ansible-1.6.6/lib/ansible/runner/connection.py
- 22a23
- > import ansible.constants as C
- 24c25,28
- < class Connector(object):
- ---
- > import os
- > import os.path
- >
- > class Connection(object):
- 30a35
- > conn = None
- 35a41,42
- >
- >
- diff -r ansible/ansible/lib/ansible/runner/filter_plugins/core.py ansible-1.6.6/lib/ansible/runner/filter_plugins/core.py
- 31d30
- < from jinja2.filters import environmentfilter
- 136,139d134
- <
- < if not isinstance(value, basestring):
- < value = str(value)
- <
- 189,190c184
- < @environmentfilter
- < def rand(environment, end, start=None, step=None):
- ---
- > def rand(end, start=None, step=None):
- 229d222
- < 'relpath': os.path.relpath,
- diff -r ansible/ansible/lib/ansible/runner/__init__.py ansible-1.6.6/lib/ansible/runner/__init__.py
- 170c170
- < self.connector = connection.Connector(self)
- ---
- > self.connector = connection.Connection(self)
- 278c278
- < remote = conn.shell.join_path(tmp, name)
- ---
- > remote = os.path.join(tmp, name)
- 287c287
- < def _compute_environment_string(self, conn, inject=None):
- ---
- > def _compute_environment_string(self, inject=None):
- 290c290,294
- < enviro = {}
- ---
- > default_environment = dict(
- > LANG = C.DEFAULT_MODULE_LANG,
- > LC_CTYPE = C.DEFAULT_MODULE_LANG,
- > )
- >
- 295a300
- > default_environment.update(enviro)
- 297c302,305
- < return conn.shell.env_prefix(**enviro)
- ---
- > result = ""
- > for (k,v) in default_environment.iteritems():
- > result = "%s=%s %s" % (k, pipes.quote(unicode(v)), result)
- > return result
- 413c421
- < remote_module_path = conn.shell.join_path(tmp, module_name)
- ---
- > remote_module_path = os.path.join(tmp, module_name)
- 423c431
- < environment_string = self._compute_environment_string(conn, inject)
- ---
- > environment_string = self._compute_environment_string(inject)
- 427c435,436
- < self._remote_chmod(conn, 'a+r', remote_module_path, tmp)
- ---
- > cmd_chmod = "chmod a+r %s" % remote_module_path
- > self._low_level_exec_command(conn, cmd_chmod, tmp, sudoable=False)
- 455c464,465
- < self._remote_chmod(conn, 'a+r', argsfile, tmp)
- ---
- > cmd_args_chmod = "chmod a+r %s" % argsfile
- > self._low_level_exec_command(conn, cmd_args_chmod, tmp, sudoable=False)
- 473c483,486
- < rm_tmp = None
- ---
- >
- > cmd = " ".join([environment_string.strip(), shebang.replace("#!","").strip(), cmd])
- > cmd = cmd.strip()
- >
- 477,480c490
- < rm_tmp = tmp
- <
- < cmd = conn.shell.build_module_command(environment_string, shebang, cmd, rm_tmp)
- < cmd = cmd.strip()
- ---
- > cmd = cmd + "; rm -rf %s >/dev/null 2>&1" % tmp
- 497c507
- < cmd2 = conn.shell.remove(tmp, recurse=True)
- ---
- > cmd2 = "rm -rf %s >/dev/null 2>&1" % tmp
- 762c772,773
- < actual_port = inject.get('ansible_ssh_port', port)
- ---
- > if actual_transport in [ 'paramiko', 'ssh', 'accelerate' ]:
- > actual_port = inject.get('ansible_ssh_port', port)
- 803,814d813
- < default_shell = getattr(conn, 'default_shell', '')
- < shell_type = inject.get('ansible_shell_type')
- < if not shell_type:
- < if default_shell:
- < shell_type = default_shell
- < else:
- < shell_type = os.path.basename(C.DEFAULT_EXECUTABLE)
- <
- < shell_plugin = utils.plugins.shell_loader.get(shell_type)
- < if shell_plugin is None:
- < shell_plugin = utils.plugins.shell_loader.get('sh')
- < conn.shell = shell_plugin
- 944,947d942
- < if not cmd:
- < # this can happen with powershell modules when there is no analog to a Windows command (like chmod)
- < return dict(stdout='', stderr='')
- <
- 955,959c950,959
- < # assume connection type is local if no user attribute
- < this_user = getattr(conn, 'user', getpass.getuser())
- < if (not su and this_user == sudo_user) or (su and this_user == su_user):
- < sudoable = False
- < su = False
- ---
- > if hasattr(conn, 'user'):
- > if (not su and conn.user == sudo_user) or (su and conn.user == su_user):
- > sudoable = False
- > su = False
- > else:
- > # assume connection type is local if no user attribute
- > this_user = getpass.getuser()
- > if (not su and this_user == sudo_user) or (su and this_user == su_user):
- > sudoable = False
- > su = False
- 993,999d992
- < def _remote_chmod(self, conn, mode, path, tmp, sudoable=False, su=False):
- < ''' issue a remote chmod command '''
- < cmd = conn.shell.chmod(mode, path)
- < return self._low_level_exec_command(conn, cmd, tmp, sudoable=sudoable, su=su)
- <
- < # *****************************************************
- <
- 1002c995,1012
- < cmd = conn.shell.md5(path)
- ---
- >
- > path = pipes.quote(path)
- > # The following test needs to be SH-compliant. BASH-isms will
- > # not work if /bin/sh points to a non-BASH shell.
- > test = "rc=0; [ -r \"%s\" ] || rc=2; [ -f \"%s\" ] || rc=1; [ -d \"%s\" ] && echo 3 && exit 0" % ((path,) * 3)
- > md5s = [
- > "(/usr/bin/md5sum %s 2>/dev/null)" % path, # Linux
- > "(/sbin/md5sum -q %s 2>/dev/null)" % path, # ?
- > "(/usr/bin/digest -a md5 %s 2>/dev/null)" % path, # Solaris 10+
- > "(/sbin/md5 -q %s 2>/dev/null)" % path, # Freebsd
- > "(/usr/bin/md5 -n %s 2>/dev/null)" % path, # Netbsd
- > "(/bin/md5 -q %s 2>/dev/null)" % path, # Openbsd
- > "(/usr/bin/csum -h MD5 %s 2>/dev/null)" % path, # AIX
- > "(/bin/csum -h MD5 %s 2>/dev/null)" % path # AIX also
- > ]
- >
- > cmd = " || ".join(md5s)
- > cmd = "%s; %s || (echo \"${rc} %s\")" % (test, cmd, path)
- 1024a1035
- >
- 1026,1028c1037,1039
- < use_system_tmp = False
- < if (self.sudo and self.sudo_user != 'root') or (self.su and self.su_user != 'root'):
- < use_system_tmp = True
- ---
- > basetmp = os.path.join(C.DEFAULT_REMOTE_TMP, basefile)
- > if (self.sudo and self.sudo_user != 'root') or (self.su and self.su_user != 'root') and basetmp.startswith('$HOME'):
- > basetmp = os.path.join('/tmp', basefile)
- 1030c1041
- < tmp_mode = None
- ---
- > cmd = 'mkdir -p %s' % basetmp
- 1032c1043,1044
- < tmp_mode = 'a+rx'
- ---
- > cmd += ' && chmod a+rx %s' % basetmp
- > cmd += ' && echo %s' % basetmp
- 1034d1045
- < cmd = conn.shell.mkdtemp(basefile, use_system_tmp, tmp_mode)
- 1052c1063
- < rc = conn.shell.join_path(utils.last_non_blank_line(result['stdout']).strip(), '')
- ---
- > rc = utils.last_non_blank_line(result['stdout']).strip() + '/'
- 1062a1074
- >
- 1064c1076
- < cmd = conn.shell.remove(tmp_path, recurse=True)
- ---
- > cmd = "rm -rf %s >/dev/null 2>&1" % tmp_path
- 1078c1090
- < module_remote_path = conn.shell.join_path(tmp, module_name)
- ---
- > module_remote_path = os.path.join(tmp, module_name)
- 1090,1091c1102
- < module_suffixes = getattr(conn, 'default_suffixes', None)
- < module_path = utils.plugins.module_finder.find_plugin(module_name, module_suffixes)
- ---
- > module_path = utils.plugins.module_finder.find_plugin(module_name)
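- One 1.6.6-side hunk in runner/__init__.py above builds a single sh-compliant command that probes a path, then tries a chain of platform-specific md5 binaries, echoing a marker code when the path is unreadable, missing, or a directory. A sketch of that command construction, using `shlex.quote` in place of the Python 2 `pipes.quote` (the binary list here is a representative subset):

```python
from shlex import quote

def remote_md5_command(path):
    """Build an sh-compliant command string that tries several md5
    binaries in turn, mirroring the 1.6.6-side hunk above. The leading
    test must avoid BASH-isms in case /bin/sh is not bash."""
    path = quote(path)
    test = ('rc=0; [ -r "%s" ] || rc=2; [ -f "%s" ] || rc=1; '
            '[ -d "%s" ] && echo 3 && exit 0' % ((path,) * 3))
    md5s = [
        "(/usr/bin/md5sum %s 2>/dev/null)" % path,       # Linux
        "(/sbin/md5 -q %s 2>/dev/null)" % path,          # FreeBSD
        "(/bin/md5 -q %s 2>/dev/null)" % path,           # OpenBSD
        "(/usr/bin/csum -h MD5 %s 2>/dev/null)" % path,  # AIX
    ]
    cmd = " || ".join(md5s)
    # on total failure, echo the rc marker so the caller can tell
    # "missing file" apart from "no md5 binary found"
    return '%s; %s || (echo "${rc} %s")' % (test, cmd, path)
```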
- diff -r ansible/ansible/lib/ansible/runner/lookup_plugins/file.py ansible-1.6.6/lib/ansible/runner/lookup_plugins/file.py
- 38,40c38,40
- < basedir_path = utils.path_dwim(self.basedir, term)
- < relative_path = None
- < playbook_path = None
- ---
- > path = utils.path_dwim(self.basedir, term)
- > if not os.path.exists(path):
- > raise errors.AnsibleError("%s does not exist" % path)
- 42,50c42
- < # Special handling of the file lookup, used primarily when the
- < # lookup is done from a role. If the file isn't found in the
- < # basedir of the current file, use dwim_relative to look in the
- < # role/files/ directory, and finally the playbook directory
- < # itself (which will be relative to the current working dir)
- < if '_original_file' in inject:
- < relative_path = utils.path_dwim_relative(inject['_original_file'], 'files', term, self.basedir, check=False)
- < if 'playbook_dir' in inject:
- < playbook_path = os.path.join(inject['playbook_dir'], term)
- ---
- > ret.append(codecs.open(path, encoding="utf8").read().rstrip())
- 52,57d43
- < for path in (basedir_path, relative_path, playbook_path):
- < if path and os.path.exists(path):
- < ret.append(codecs.open(path, encoding="utf8").read().rstrip())
- < break
- < else:
- < raise errors.AnsibleError("could not locate file in lookup: %s" % term)
- Only in ansible/ansible/lib/ansible/runner: shell_plugins
- diff -r ansible/ansible/lib/ansible/utils/__init__.py ansible-1.6.6/lib/ansible/utils/__init__.py
- 1c1
- < # (c) 2012-2014, Michael DeHaan <[email protected]>
- ---
- > # (c) 2012, Michael DeHaan <[email protected]>
- 18d17
- < import errno
- 542a542
- > msg = process_common_errors(msg, probline, mark.column)
- 559a560
- > msg = process_common_errors(msg, probline, mark.column)
- 624c625
- < result = {}
- ---
- > result = copy.deepcopy(a)
- 626,636c627,636
- < for dicts in a, b:
- < # next, iterate over b keys and values
- < for k, v in dicts.iteritems():
- < # if there's already such key in a
- < # and that key contains dict
- < if k in result and isinstance(result[k], dict):
- < # merge those dicts recursively
- < result[k] = merge_hash(a[k], v)
- < else:
- < # otherwise, just copy a value from b to a
- < result[k] = v
- ---
- > # next, iterate over b keys and values
- > for k, v in b.iteritems():
- > # if there's already such key in a
- > # and that key contains dict
- > if k in result and isinstance(result[k], dict):
- > # merge those dicts recursively
- > result[k] = merge_hash(a[k], v)
- > else:
- > # otherwise, just copy a value from b to a
- > result[k] = v
- 651c651
- < ''' Return MD5 hex digest of local file, None if file is not present or a directory. '''
- ---
- > ''' Return MD5 hex digest of local file, or None if file is not present. '''
- 653c653
- < if not os.path.exists(filename) or os.path.isdir(filename):
- ---
- > if not os.path.exists(filename):
- 993c993
- < prompt = '[Pp]assword: ?$'
- ---
- > prompt = 'assword: '
- 995c995
- < sudocmd = '%s %s %s -c "%s -c %s"' % (
- ---
- > sudocmd = '%s %s %s %s -c %s' % (
- 1213,1222d1212
- < def load_vars(basepath, results, vault_password=None):
- < """
- < Load variables from any potential yaml filename combinations of basepath,
- < returning result.
- < """
- <
- < paths_to_check = [ "".join([basepath, ext])
- < for ext in C.YAML_FILENAME_EXTENSIONS ]
- <
- < found_paths = []
- 1224,1322d1213
- < for path in paths_to_check:
- < found, results = _load_vars_from_path(path, results, vault_password=vault_password)
- < if found:
- < found_paths.append(path)
- <
- <
- < # disallow the potentially confusing situation that there are multiple
- < # variable files for the same name. For example if both group_vars/all.yml
- < # and group_vars/all.yaml
- < if len(found_paths) > 1:
- < raise errors.AnsibleError("Multiple variable files found. "
- < "There should only be one. %s" % ( found_paths, ))
- <
- < return results
- <
- < ## load variables from yaml files/dirs
- < # e.g. host/group_vars
- < #
- < def _load_vars_from_path(path, results, vault_password=None):
- < """
- < Robustly access the file at path and load variables, carefully reporting
- < errors in a friendly/informative way.
- <
- < Return the tuple (found, new_results, )
- < """
- <
- < try:
- < # in the case of a symbolic link, we want the stat of the link itself,
- < # not its target
- < pathstat = os.lstat(path)
- < except os.error, err:
- < # most common case is that nothing exists at that path.
- < if err.errno == errno.ENOENT:
- < return False, results
- < # otherwise this is a condition we should report to the user
- < raise errors.AnsibleError(
- < "%s is not accessible: %s."
- < " Please check its permissions." % ( path, err.strerror))
- <
- < # symbolic link
- < if stat.S_ISLNK(pathstat.st_mode):
- < try:
- < target = os.path.realpath(path)
- < except os.error, err2:
- < raise errors.AnsibleError("The symbolic link at %s "
- < "is not readable: %s. Please check its permissions."
- < % (path, err2.strerror, ))
- < # follow symbolic link chains by recursing, so we repeat the same
- < # permissions checks above and provide useful errors.
- < return _load_vars_from_path(target, results)
- <
- < # directory
- < if stat.S_ISDIR(pathstat.st_mode):
- <
- < # support organizing variables across multiple files in a directory
- < return True, _load_vars_from_folder(path, results, vault_password=vault_password)
- <
- < # regular file
- < elif stat.S_ISREG(pathstat.st_mode):
- < data = parse_yaml_from_file(path, vault_password=vault_password)
- < if data and type(data) != dict:
- < raise errors.AnsibleError(
- < "%s must be stored as a dictionary/hash" % path)
- < elif data is None:
- < data = {}
- <
- < # combine vars overrides by default but can be configured to do a
- < # hash merge in settings
- < results = combine_vars(results, data)
- < return True, results
- <
- < # something else? could be a fifo, socket, device, etc.
- < else:
- < raise errors.AnsibleError("Expected a variable file or directory "
- < "but found a non-file object at path %s" % (path, ))
- <
- < def _load_vars_from_folder(folder_path, results, vault_password=None):
- < """
- < Load all variables within a folder recursively.
- < """
- <
- < # this function and _load_vars_from_path are mutually recursive
- <
- < try:
- < names = os.listdir(folder_path)
- < except os.error, err:
- < raise errors.AnsibleError(
- < "This folder cannot be listed: %s: %s."
- < % ( folder_path, err.strerror))
- <
- < # evaluate files in a stable order rather than whatever order the
- < # filesystem lists them.
- < names.sort()
- <
- < # do not parse hidden files or dirs, e.g. .svn/
- < paths = [os.path.join(folder_path, name) for name in names if not name.startswith('.')]
- < for path in paths:
- < _found, results = _load_vars_from_path(path, results, vault_password=vault_password)
- < return results
- diff -r ansible/ansible/lib/ansible/utils/module_docs_fragments/files.py ansible-1.6.6/lib/ansible/utils/module_docs_fragments/files.py
- 23a24
- > options:
- 33,40c34,41
- < do not exist, since 1.7 they will be created with the supplied permissions.
- < If C(file), the file will NOT be created if it does not exist, see the M(copy)
- < or M(template) module if you want that behavior. If C(link), the symbolic
- < link will be created or changed. Use C(hard) for hardlinks. If C(absent),
- < directories will be recursively deleted, and files or symlinks will be unlinked.
- < If C(touch) (new in 1.4), an empty file will be created if the c(path) does not
- < exist, while an existing file or directory will receive updated file access and
- < modification times (similar to the way `touch` works from the command line).
- ---
- > do not exist. If C(file), the file will NOT be created if it does not
- > exist, see the M(copy) or M(template) module if you want that behavior.
- > If C(link), the symbolic link will be created or changed. Use C(hard)
- > for hardlinks. If C(absent), directories will be recursively deleted,
- > and files or symlinks will be unlinked. If C(touch) (new in 1.4), an empty file will
- > be created if the c(path) does not exist, while an existing file or
- > directory will receive updated file access and modification times (similar
- > to the way `touch` works from the command line).
- diff -r ansible/ansible/lib/ansible/utils/plugins.py ansible-1.6.6/lib/ansible/utils/plugins.py
- 142c142
- < def find_plugin(self, name, suffixes=None):
- ---
- > def find_plugin(self, name):
- 145,160c145,156
- < if not suffixes:
- < if self.class_name:
- < suffixes = ['.py']
- < else:
- < suffixes = ['', '.ps1']
- <
- < for suffix in suffixes:
- < full_name = '%s%s' % (name, suffix)
- < if full_name in self._plugin_path_cache:
- < return self._plugin_path_cache[full_name]
- <
- < for i in self._get_paths():
- < path = os.path.join(i, full_name)
- < if os.path.isfile(path):
- < self._plugin_path_cache[full_name] = path
- < return path
- ---
- > if name in self._plugin_path_cache:
- > return self._plugin_path_cache[name]
- >
- > suffix = ".py"
- > if not self.class_name:
- > suffix = ""
- >
- > for i in self._get_paths():
- > path = os.path.join(i, "%s%s" % (name, suffix))
- > if os.path.isfile(path):
- > self._plugin_path_cache[name] = path
- > return path
- 217,223d212
- < )
- <
- < shell_loader = PluginLoader(
- < 'ShellModule',
- < 'ansible.runner.shell_plugins',
- < 'shell_plugins',
- < 'shell_plugins',
- diff -r ansible/ansible/lib/ansible/utils/template.py ansible-1.6.6/lib/ansible/utils/template.py
- 83,84d82
- < JINJA2_OVERRIDE = '#jinja2:'
- < JINJA2_ALLOWED_OVERRIDES = ['trim_blocks', 'lstrip_blocks', 'newline_sequence', 'keep_trailing_newline']
- 95,97d92
- < except errors.AnsibleError:
- < # Plugin raised this on purpose
- < raise
- 235,246d229
- < # Get jinja env overrides from template
- < if data.startswith(JINJA2_OVERRIDE):
- < eol = data.find('\n')
- < line = data[len(JINJA2_OVERRIDE):eol]
- < data = data[eol+1:]
- < for pair in line.split(','):
- < (key,val) = pair.split(':')
- < key = key.strip()
- < if key in JINJA2_ALLOWED_OVERRIDES:
- < setattr(environment, key, ast.literal_eval(val.strip()))
- <
- <
- 287,296d269
- < except jinja2.exceptions.TemplateNotFound, e:
- < # Throw an exception which includes a more user friendly error message
- < # This likely will happen for included sub-template. Not that besides
- < # pure "file not found" it may happen due to Jinja2's "security"
- < # checks on path.
- < values = {'name': realpath, 'subname': str(e)}
- < msg = 'file: %(name)s, error: Cannot find/not allowed to load (include) template %(subname)s' % \
- < values
- < error = errors.AnsibleError(msg)
- < raise error
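The `template.py` hunk adds a `#jinja2:` header convention: a template's first line can override a whitelisted set of Jinja2 environment options. A self-contained Python 3 sketch of the header parsing (the original applies each key with `setattr` on the environment; this version returns the overrides as a dict instead, and `parse_jinja2_overrides` is a hypothetical name):

```python
import ast

JINJA2_OVERRIDE = '#jinja2:'
JINJA2_ALLOWED_OVERRIDES = ['trim_blocks', 'lstrip_blocks',
                            'newline_sequence', 'keep_trailing_newline']

def parse_jinja2_overrides(data):
    """Split a leading '#jinja2:key:val, key:val' header off template data
    and return (remaining_data, overrides). Only whitelisted keys are kept,
    and values are parsed with ast.literal_eval, as in the diff."""
    overrides = {}
    if data.startswith(JINJA2_OVERRIDE):
        eol = data.find('\n')
        line = data[len(JINJA2_OVERRIDE):eol]
        data = data[eol + 1:]
        for pair in line.split(','):
            key, val = pair.split(':')
            key = key.strip()
            if key in JINJA2_ALLOWED_OVERRIDES:
                overrides[key] = ast.literal_eval(val.strip())
    return data, overrides
```

Using `ast.literal_eval` rather than `eval` keeps the header from executing arbitrary code; only Python literals like `True` or `'\n'` are accepted.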
- Only in ansible/ansible/library/cloud: azure
- diff -r ansible/ansible/library/cloud/digital_ocean_domain ansible-1.6.6/library/cloud/digital_ocean_domain
- 62c62
- < - digital_ocean: >
- ---
- > - digital_cean_droplet: >
- 72,73c72,73
- < name={{ test_droplet.droplet.name }}.my.domain
- < ip={{ test_droplet.droplet.ip_address }}
- ---
- > name={{ test_droplet.name }}.my.domain
- > ip={{ test_droplet.ip_address }}
- diff -r ansible/ansible/library/cloud/docker ansible-1.6.6/library/cloud/docker
- 187c187
- < requirements: [ "docker-py >= 0.3.0", "docker >= 0.10.0" ]
- ---
- > requirements: [ "docker-py >= 0.3.0" ]
- 383c383
- < self.links = self.get_links(self.module.params.get('links'))
- ---
- > self.links = dict(map(lambda x: x.split(':'), self.module.params.get('links')))
- 394,409d393
- < def get_links(self, links):
- < """
- < Parse the links passed, if a link is specified without an alias then just create the alias of the same name as the link
- < """
- < processed_links = {}
- <
- < for link in links:
- < parsed_link = link.split(':', 1)
- < if(len(parsed_link) == 2):
- < processed_links[parsed_link[0]] = parsed_link[1]
- < else:
- < processed_links[parsed_link[0]] = parsed_link[0]
- <
- < return processed_links
- <
- <
- 468a453
- >
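The `docker` module hunk replaces a one-line `dict(map(lambda x: x.split(':') ...))` with an explicit `get_links` helper. The motivation is visible in the diff comment: a link given without an alias should alias to itself, whereas the old lambda produced a malformed pair for a bare name. A Python 3 sketch of the added helper:

```python
def get_links(links):
    """Parse 'name:alias' link strings into a dict; a bare 'name' with no
    alias maps to itself, matching the helper added in the diff."""
    processed_links = {}
    for link in links:
        parsed_link = link.split(':', 1)
        if len(parsed_link) == 2:
            processed_links[parsed_link[0]] = parsed_link[1]
        else:
            processed_links[parsed_link[0]] = parsed_link[0]
    return processed_links
```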
- diff -r ansible/ansible/library/cloud/docker_image ansible-1.6.6/library/cloud/docker_image
- 48c48
- < default: "latest"
- ---
- > default: ""
- 100c100
- < - name: remove image
- ---
- > - name: run tomcat servers
- 116,120d115
- < try:
- < from docker.errors import APIError as DockerAPIError
- < except ImportError:
- < from docker.client import APIError as DockerAPIError
- <
- 148,151c143
- < try:
- < chunk_json = json.loads(chunk)
- < except ValueError:
- < continue
- ---
- > chunk_json = json.loads(chunk)
- 164,169d155
- < # Just in case we skipped evaluating the JSON returned from build
- < # during every iteration, add an error if the image_id was never
- < # populated
- < if not image_id:
- < self.error_msg = 'Unknown error encountered'
- <
- 181c167
- < repotag = ':'.join([self.name, self.tag])
- ---
- > repotag = '%s:%s' % (getattr(self, 'name', ''), getattr(self, 'tag', 'latest'))
- 195c181
- < except DockerAPIError as e:
- ---
- > except docker.APIError as e:
- 205c191
- < tag = dict(required=False, default="latest"),
- ---
- > tag = dict(required=False, default=""),
- 236c222
- < msg = "Image built: %s" % image_id
- ---
- > msg = "Image builded: %s" % image_id
- 243c229
- < except DockerAPIError as e:
- ---
- > except docker.client.APIError as e:
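Two related hardening changes run through the `docker_image` hunk: the build-output loop now skips chunks that are not valid JSON instead of crashing, and reports a fallback error if no image id was ever seen. A Python 3 sketch of that loop in isolation (`extract_image_id` is a hypothetical name, and the `'Successfully built '` stream prefix is an assumption about docker build output that the diff itself does not show):

```python
import json

def extract_image_id(chunks):
    """Tolerantly scan docker build-output chunks for an image id. Chunks
    that are not valid JSON dicts are skipped, as in the diff; if no id is
    ever found, a generic error message is returned instead."""
    image_id = None
    for chunk in chunks:
        try:
            chunk_json = json.loads(chunk)
        except ValueError:
            continue
        if not isinstance(chunk_json, dict):
            continue
        stream = chunk_json.get('stream', '')
        if 'Successfully built ' in stream:
            image_id = stream.split('Successfully built ')[1].strip()
    if not image_id:
        return None, 'Unknown error encountered'
    return image_id, None
```

The same hunk also adds a try/except around the `DockerAPIError` import so the module works across docker-py versions that moved the exception between `docker.errors` and `docker.client`.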
- diff -r ansible/ansible/library/cloud/ec2 ansible-1.6.6/library/cloud/ec2
- 178c178
- < - "list of instance ids, currently used for states: absent, running, stopped"
- ---
- > - list of instance ids, currently used for the states 'absent', 'running', and 'stopped'
- 573c573,574
- < 'hypervisor': inst.hypervisor}
- ---
- > 'hypervisor': inst.hypervisor,
- > 'ebs_optimized': inst.ebs_optimized}
- 579,583d579
- < try:
- < instance_info['ebs_optimized'] = getattr(inst, 'ebs_optimized')
- < except AttributeError:
- < instance_info['ebs_optimized'] = False
- <
- 919a916,921
- > if instance_tags:
- > try:
- > ec2.create_tags(instids, instance_tags)
- > except boto.exception.EC2ResponseError, e:
- > module.fail_json(msg = "Instance tagging failed => %s: %s" % (e.error_code, e.error_message))
- >
- 924,932c926
- < try:
- < res_list = ec2.get_all_instances(instids)
- < except boto.exception.BotoSeverError, e:
- < if e.error_code == 'InvalidInstanceID.NotFound':
- < time.sleep(1)
- < continue
- < else:
- < raise
- <
- ---
- > res_list = ec2.get_all_instances(instids)
- 959,965d952
- < # Leave this as late as possible to try and avoid InvalidInstanceID.NotFound
- < if instance_tags:
- < try:
- < ec2.create_tags(instids, instance_tags)
- < except boto.exception.EC2ResponseError, e:
- < module.fail_json(msg = "Instance tagging failed => %s: %s" % (e.error_code, e.error_message))
- <
- 1006c993
- < if inst.state == 'running' or inst.state == 'stopped':
- ---
- > if inst.state == 'running':
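The `ec2` hunk works around EC2's eventual consistency: freshly created instance ids can briefly be invisible to `get_all_instances`, so the new code retries on `InvalidInstanceID.NotFound` and defers tagging until as late as possible. A generic Python 3 sketch of the retry shape (`NotFoundError` is a local stand-in for boto's response exception; the real loop matches on `e.error_code` instead):

```python
import time

class NotFoundError(Exception):
    """Stand-in for boto's InvalidInstanceID.NotFound error response."""

def fetch_until_found(fetch, retries=5, delay=1.0):
    """Call fetch() until it stops raising NotFoundError, sleeping between
    attempts; re-raise once the retry budget is exhausted."""
    for attempt in range(retries):
        try:
            return fetch()
        except NotFoundError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)
```

The diff's move of `ec2.create_tags(instids, instance_tags)` below the describe loop follows the same logic: by the time tagging runs, the ids have usually propagated.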
- diff -r ansible/ansible/library/cloud/ec2_asg ansible-1.6.6/library/cloud/ec2_asg
- 70,87d69
- < tags:
- < description:
- < - List of tag dictionaries to use. Required keys are 'key', 'value'. Optional key is 'propagate_at_launch', which defaults to true.
- < required: false
- < default: None
- < version_added: "1.7"
- < health_check_period:
- < description:
- < - Length of time in seconds after a new EC2 instance comes into service that Auto Scaling starts checking its health.
- < required: false
- < default: 500 seconds
- < version_added: "1.7"
- < health_check_type:
- < description:
- < - The service you want the health status from, Amazon EC2 or Elastic Load Balancer.
- < required: false
- < default: EC2
- < version_added: "1.7"
- 101,105d82
- < tags:
- < - key: environment
- < value: production
- < propagate_at_launch: no
- <
- 116c93
- < from boto.ec2.autoscale import AutoScaleConnection, AutoScalingGroup, Tag
- ---
- > from boto.ec2.autoscale import AutoScaleConnection, AutoScalingGroup
- 122,123d98
- < ASG_ATTRIBUTES = ('launch_config_name', 'max_size', 'min_size', 'desired_capacity',
- < 'vpc_zone_identifier', 'availability_zones')
- 137,144d111
- < def get_properties(autoscaling_group):
- < properties = dict((attr, getattr(autoscaling_group, attr)) for attr in ASG_ATTRIBUTES)
- < if autoscaling_group.instances:
- < properties['instances'] = [i.instance_id for i in autoscaling_group.instances]
- < properties['load_balancers'] = autoscaling_group.load_balancers
- < return properties
- <
- <
- 145a113
- > enforce_required_arguments(module)
- 155,157c123,124
- < set_tags = module.params.get('tags')
- < health_check_period = module.params.get('health_check_period')
- < health_check_type = module.params.get('health_check_type')
- ---
- >
- > launch_configs = connection.get_all_launch_configurations(names=[launch_config_name])
- 167,173c134
- <
- < asg_tags = []
- < for tag in set_tags:
- < asg_tags.append(Tag(key=tag.get('key'),
- < value=tag.get('value'),
- < propagate_at_launch=bool(tag.get('propagate_at_launch', True)),
- < resource_id=group_name))
- ---
- > module.params['availability_zones'] = [zone.name for zone in ec2_connection.get_all_zones()]
- 176,179d136
- < if not vpc_zone_identifier and not availability_zones:
- < availability_zones = module.params['availability_zones'] = [zone.name for zone in ec2_connection.get_all_zones()]
- < enforce_required_arguments(module)
- < launch_configs = connection.get_all_launch_configurations(names=[launch_config_name])
- 189,192c146
- < connection=connection,
- < tags=asg_tags,
- < health_check_period=health_check_period,
- < health_check_type=health_check_type)
- ---
- > connection=connection)
- 196,197c150
- < asg_properties = get_properties(ag)
- < module.exit_json(changed=True, **asg_properties)
- ---
- > module.exit_json(changed=True)
- 203,204c156,158
- < for attr in ASG_ATTRIBUTES:
- < if module.params.get(attr) and getattr(as_group, attr) != module.params.get(attr):
- ---
- > for attr in ('launch_config_name', 'max_size', 'min_size', 'desired_capacity',
- > 'vpc_zone_identifier', 'availability_zones'):
- > if getattr(as_group, attr) != module.params.get(attr):
- 207,223d160
- <
- < if len(set_tags) > 0:
- < existing_tags = as_group.tags
- < existing_tag_map = dict((tag.key, tag) for tag in existing_tags)
- < for tag in set_tags:
- < if 'key' not in tag:
- < continue
- < if ( not tag['key'] in existing_tag_map or
- < existing_tag_map[tag['key']].value != tag['value'] or
- < ('propagate_at_launch' in tag and
- < existing_tag_map[tag['key']].propagate_at_launch != tag['propagate_at_launch']) ):
- <
- < changed = True
- < continue
- < if changed:
- < connection.create_or_update_tags(asg_tags)
- <
- 226c163
- < if load_balancers and as_group.load_balancers != load_balancers:
- ---
- > if as_group.load_balancers != load_balancers:
- 233,234c170
- < asg_properties = get_properties(as_group)
- < module.exit_json(changed=changed, **asg_properties)
- ---
- > module.exit_json(changed=changed)
- 238,255d173
- < result = as_groups[0]
- < module.exit_json(changed=changed, name=result.name,
- < autoscaling_group_arn=result.autoscaling_group_arn,
- < availability_zones=result.availability_zones,
- < created_time=str(result.created_time),
- < default_cooldown=result.default_cooldown,
- < health_check_period=result.health_check_period,
- < health_check_type=result.health_check_type,
- < instance_id=result.instance_id,
- < instances=[instance.instance_id for instance in result.instances],
- < launch_config_name=result.launch_config_name,
- < load_balancers=result.load_balancers,
- < min_size=result.min_size, max_size=result.max_size,
- < placement_group=result.placement_group,
- < tags=result.tags,
- < termination_policies=result.termination_policies,
- < vpc_zone_identifier=result.vpc_zone_identifier)
- <
- 266c184
- < groups = connection.get_all_groups()
- ---
- > connection.get_all_groups()
- 292,294d209
- < tags=dict(type='list', default=[]),
- < health_check_period=dict(type='int', default=300),
- < health_check_type=dict(default='EC2', choices=['EC2', 'ELB']),
- 304,305d218
- < if not connection:
- < module.fail_json(msg="failed to connect to AWS for the given region: %s" % str(region))
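Among the `ec2_asg` additions (tags, health checks, `get_properties`), the tag-reconciliation block decides whether an update is needed by comparing each requested tag against the group's existing tags on key, value, and `propagate_at_launch`. A Python 3 sketch of that comparison using plain dicts (the original compares boto `Tag` objects via attribute access; `tags_changed` is a hypothetical name):

```python
def tags_changed(set_tags, existing_tags):
    """Return True when a requested tag is missing from the ASG, carries a
    different value, or differs in propagate_at_launch - the three cases
    the diff's loop flags as changed."""
    existing = {t['key']: t for t in existing_tags}
    for tag in set_tags:
        if 'key' not in tag:
            continue
        cur = existing.get(tag['key'])
        if (cur is None
                or cur['value'] != tag['value']
                or ('propagate_at_launch' in tag
                    and cur['propagate_at_launch'] != tag['propagate_at_launch'])):
            return True
    return False
```

When any tag differs, the module issues one `create_or_update_tags` call for the whole requested set rather than patching tags individually.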
- diff -r ansible/ansible/library/cloud/ec2_elb_lb ansible-1.6.6/library/cloud/ec2_elb_lb
- 69,96d68
- < subnets:
- < description:
- < - A list of VPC subnets to use when creating ELB. Zones should be empty if using this.
- < required: false
- < default: None
- < aliases: []
- < version_added: "1.7"
- < purge_subnets:
- < description:
- < - Purge existing subnet on ELB that are not found in subnets
- < required: false
- < default: false
- < version_added: "1.7"
- < scheme:
- < description:
- < - The scheme to use when creating the ELB. For a private VPC-visible ELB use 'internal'.
- < required: false
- < default: 'internet-facing'
- < version_added: "1.7"
- < validate_certs:
- < description:
- < - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
- < required: false
- < default: "yes"
- < choices: ["yes", "no"]
- < aliases: []
- < version_added: "1.5"
- <
- 123,137d94
- <
- < # Basic VPC provisioning example
- < - local_action:
- < module: ec2_elb_lb
- < name: "test-vpc"
- < scheme: internal
- < state: present
- < subnets:
- < - subnet-abcd1234
- < - subnet-1a2b3c4d
- < listeners:
- < - protocol: http # options are http, https, ssl, tcp
- < load_balancer_port: 80
- < instance_port: 80
- <
- 195,208d151
- <
- < # Creates a ELB and assigns a list of subnets to it.
- < - local_action:
- < module: ec2_elb_lb
- < state: present
- < name: 'New ELB'
- < security_group_ids: 'sg-123456, sg-67890'
- < region: us-west-2
- < subnets: 'subnet-123456, subnet-67890'
- < purge_subnets: yes
- < listeners:
- < - protocol: http
- < load_balancer_port: 80
- < instance_port: 80
- 228,231c171,172
- < zones=None, purge_zones=None, security_group_ids=None,
- < health_check=None, subnets=None, purge_subnets=None,
- < scheme="internet-facing", region=None, **aws_connect_params):
- <
- ---
- > zones=None, purge_zones=None, security_group_ids=None, health_check=None,
- > region=None, **aws_connect_params):
- 240,242d180
- < self.subnets = subnets
- < self.purge_subnets = purge_subnets
- < self.scheme = scheme
- 261d198
- < self._set_subnets()
- 286,288c223
- < 'status': self.status,
- < 'subnets': self.subnets,
- < 'scheme': check_elb.scheme
- ---
- > 'status': self.status
- 322c257
- < return connect_to_aws(boto.ec2.elb, self.region,
- ---
- > return connect_to_aws(boto.ec2.elb, self.region,
- 339,341c274
- < complex_listeners=listeners,
- < subnets=self.subnets,
- < scheme=self.scheme)
- ---
- > complex_listeners=listeners)
- 460,482d392
- < def _attach_subnets(self, subnets):
- < self.elb_conn.attach_lb_to_subnets(self.name, subnets)
- < self.changed = True
- <
- < def _detach_subnets(self, subnets):
- < self.elb_conn.detach_lb_from_subnets(self.name, subnets)
- < self.changed = True
- <
- < def _set_subnets(self):
- < """Determine which subnets need to be attached or detached on the ELB"""
- < if self.subnets:
- < if self.purge_subnets:
- < subnets_to_detach = list(set(self.elb.subnets) - set(self.subnets))
- < subnets_to_attach = list(set(self.subnets) - set(self.elb.subnets))
- < else:
- < subnets_to_detach = None
- < subnets_to_attach = list(set(self.subnets) - set(self.elb.subnets))
- <
- < if subnets_to_attach:
- < self._attach_subnets(subnets_to_attach)
- < if subnets_to_detach:
- < self._detach_subnets(subnets_to_detach)
- <
- 485,487c395,396
- < if self.zones:
- < if self.purge_zones:
- < zones_to_disable = list(set(self.elb.availability_zones) -
- ---
- > if self.purge_zones:
- > zones_to_disable = list(set(self.elb.availability_zones) -
- 489,499c398,408
- < zones_to_enable = list(set(self.zones) -
- < set(self.elb.availability_zones))
- < else:
- < zones_to_disable = None
- < zones_to_enable = list(set(self.zones) -
- < set(self.elb.availability_zones))
- < if zones_to_enable:
- < self._enable_zones(zones_to_enable)
- < # N.B. This must come second, in case it would have removed all zones
- < if zones_to_disable:
- < self._disable_zones(zones_to_disable)
- ---
- > zones_to_enable = list(set(self.zones) -
- > set(self.elb.availability_zones))
- > else:
- > zones_to_disable = None
- > zones_to_enable = list(set(self.zones) -
- > set(self.elb.availability_zones))
- > if zones_to_enable:
- > self._enable_zones(zones_to_enable)
- > # N.B. This must come second, in case it would have removed all zones
- > if zones_to_disable:
- > self._disable_zones(zones_to_disable)
- 558,560d466
- < subnets={'default': None, 'required': False, 'type': 'list'},
- < purge_subnets={'default': False, 'required': False, 'type': 'bool'},
- < scheme={'default': 'internet-facing', 'required': False}
- 580,582d485
- < subnets = module.params['subnets']
- < purge_subnets = module.params['purge_subnets']
- < scheme = module.params['scheme']
- 587,588c490,491
- < if state == 'present' and not (zones or subnets):
- < module.fail_json(msg="At least one availability zone or subnet is required for ELB creation")
- ---
- > if state == 'present' and not zones:
- > module.fail_json(msg="At least one availability zone is required for ELB creation")
- 591,593c494,495
- < purge_zones, security_group_ids, health_check,
- < subnets, purge_subnets,
- < scheme, region=region, **aws_connect_params)
- ---
- > purge_zones, security_group_ids, health_check,
- > region=region, **aws_connect_params)
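The new `_set_subnets` logic in `ec2_elb_lb` is plain set arithmetic: attach whatever is desired but missing, and detach extras only when `purge_subnets` is on (the same shape the zone-handling code already used). A Python 3 sketch of the planning step alone, separated from the boto attach/detach calls (`plan_subnet_changes` is a hypothetical name):

```python
def plan_subnet_changes(desired, current, purge):
    """Compute (to_attach, to_detach) as set differences, mirroring
    _set_subnets: extras are only detached when purging; results are
    sorted here purely for determinism."""
    to_attach = sorted(set(desired) - set(current))
    to_detach = sorted(set(current) - set(desired)) if purge else []
    return to_attach, to_detach
```

Note the related ordering comment in the zones hunk: enabling must happen before disabling so a reconciliation can never momentarily remove every zone.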
- diff -r ansible/ansible/library/cloud/ec2_group ansible-1.6.6/library/cloud/ec2_group
- 64,65c64,65
- < aws_secret_key: SECRET
- < aws_access_key: ACCESS
- ---
- > ec2_secret_key: SECRET
- > ec2_access_key: ACCESS
- 109c109
- < def get_target_from_rule(module, rule, name, group, groups):
- ---
- > def get_target_from_rule(rule, name, groups):
- 252c252
- < group_id, ip, target_group_created = get_target_from_rule(module, rule, name, group, groups)
- ---
- > group_id, ip, target_group_created = get_target_from_rule(rule, name, groups)
- 292c292
- < group_id, ip, target_group_created = get_target_from_rule(module, rule, name, group, groups)
- ---
- > group_id, ip, target_group_created = get_target_from_rule(rule, name, groups)
- diff -r ansible/ansible/library/cloud/ec2_lc ansible-1.6.6/library/cloud/ec2_lc
- 71,81d70
- < spot_price:
- < description:
- < - The spot price you are bidding. Only applies for an autoscaling group with spot instances.
- < required: false
- < default: null
- < instance_monitoring:
- < description:
- < - whether instances in group are launched with detailed monitoring.
- < required: false
- < default: false
- < aliases: []
- 140,141d128
- < spot_price = module.params.get('spot_price')
- < instance_monitoring = module.params.get('instance_monitoring')
- 160,162c147
- < instance_type=instance_type,
- < spot_price=spot_price,
- < instance_monitoring=instance_monitoring)
- ---
- > instance_type=instance_type)
- 202,203d186
- < spot_price=dict(type='float'),
- < instance_monitoring=dict(default=False, type='bool'),
- diff -r ansible/ansible/library/cloud/ec2_metric_alarm ansible-1.6.6/library/cloud/ec2_metric_alarm
- 56,57c56,57
- < threshold:
- < description:
- ---
- > threshold:
- > description:
- 68c68
- < unit:
- ---
- > unit:
- 82c82
- < description:
- ---
- > description:
- 132c132
- <
- ---
- >
- 149c149
- <
- ---
- >
- 154c154
- < metric=metric,
- ---
- > metric=metric,
- 168c168
- < try:
- ---
- > try:
- 170,171c170
- < changed = True
- < alarms = connection.describe_alarms(alarm_names=[name])
- ---
- > module.exit_json(changed=True)
- 190c189
- <
- ---
- >
- 196c195
- < setattr(alarm, 'dimensions', dim1)
- ---
- > setattr(alarm, 'dimensions', dim1)
- 198c197
- < for attr in ('alarm_actions','insufficient_data_actions','ok_actions'):
- ---
- > for attr in ('alarm_actions','insufficient_data_actions','ok_actions'):
- 203c202
- <
- ---
- >
- 206a206
- > module.exit_json(changed=changed)
- 209,228c209
- < result = alarms[0]
- < module.exit_json(changed=changed, name=result.name,
- < actions_enabled=result.actions_enabled,
- < alarm_actions=result.alarm_actions,
- < alarm_arn=result.alarm_arn,
- < comparison=result.comparison,
- < description=result.description,
- < dimensions=result.dimensions,
- < evaluation_periods=result.evaluation_periods,
- < insufficient_data_actions=result.insufficient_data_actions,
- < last_updated=result.last_updated,
- < metric=result.metric,
- < namespace=result.namespace,
- < ok_actions=result.ok_actions,
- < period=result.period,
- < state_reason=result.state_reason,
- < state_value=result.state_value,
- < statistic=result.statistic,
- < threshold=result.threshold,
- < unit=result.unit)
- ---
- >
- 234c215
- <
- ---
- >
- diff -r ansible/ansible/library/cloud/ec2_scaling_policy ansible-1.6.6/library/cloud/ec2_scaling_policy
- 26c26
- < desciption:
- ---
- > desciption:
- 63c63
- < try:
- ---
- > try:
- 80c80
- <
- ---
- >
- 82c82
- <
- ---
- >
- 94,95c94
- < policy = connection.get_all_policies(policy_names=[sp_name])[0]
- < module.exit_json(changed=True, name=policy.name, arn=policy.policy_arn, as_name=policy.as_name, scaling_adjustment=policy.scaling_adjustment, cooldown=policy.cooldown, adjustment_type=policy.adjustment_type, min_adjustment_step=policy.min_adjustment_step)
- ---
- > module.exit_json(changed=True)
- 102,103c101,102
- < # min_adjustment_step attribute is only relevant if the adjustment_type
- < # is set to percentage change in capacity, so it is a special case
- ---
- > #min_adjustment_step attribute is only relevant if the adjustment_type
- > #is set to percentage change in capacity, so it is a special case
- 107,109c106,107
- <
- < # set the min adjustment step incase the user decided to change their
- < # adjustment type to percentage
- ---
- >
- > #set the min adjustment step incase the user decided to change their adjustment type to percentage
- 112c110
- < # check the remaining attributes
- ---
- > #check the remaining attributes
- 122c120,121
- < module.exit_json(changed=changed, name=policy.name, arn=policy.policy_arn, as_name=policy.as_name, scaling_adjustment=policy.scaling_adjustment, cooldown=policy.cooldown, adjustment_type=policy.adjustment_type, min_adjustment_step=policy.min_adjustment_step)
- ---
- > module.exit_json(changed=changed, name=policy.name, arn=policy.policy_arn, as_name=policy.as_name, scaling_adjustment=policy.scaling_adjustment, cooldown=policy.cooldown, adjustment_type=policy.adjustment_type, min_adjustment_step=policy.min_adjustment_step)
- > module.exit_json(changed=changed)
- 155c154
- < )
- ---
- > )
- 157c156
- <
- ---
- >
- 161c160
- <
- ---
- >
- 166,167d164
- < if not connection:
- < module.fail_json(msg="failed to connect to AWS for the given region: %s" % str(region))
- 177a175,180
- >
- >
- >
- >
- >
- >
- diff -r ansible/ansible/library/cloud/ec2_snapshot ansible-1.6.6/library/cloud/ec2_snapshot
- 28a29
- > default: null
- 33a35,36
- > default: null
- > aliases: []
- 37a41,42
- > default: null
- > aliases: []
- 41a47,48
- > default: null
- > aliases: []
- 45a53,68
- > default: null
- > aliases: []
- > profile:
- > description:
- > - uses a boto profile. Only works with boto >= 2.24.0
- > required: false
- > default: null
- > aliases: []
- > version_added: "1.6"
- > security_token:
- > description:
- > - security token to authenticate against AWS
- > required: false
- > default: null
- > aliases: []
- > version_added: "1.6"
- 49a73,74
- > default: null
- > aliases: []
- 90,92c115,116
- < argument_spec = ec2_argument_spec()
- < argument_spec.update(
- < dict(
- ---
- > module = AnsibleModule(
- > argument_spec = dict(
- 96a121,124
- > region = dict(aliases=['aws_region', 'ec2_region'], choices=AWS_REGIONS),
- > ec2_url = dict(),
- > ec2_secret_key = dict(aliases=['aws_secret_key', 'secret_key'], no_log=True),
- > ec2_access_key = dict(aliases=['aws_access_key', 'access_key']),
- 102d129
- < module = AnsibleModule(argument_spec=argument_spec)
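The `ec2_snapshot` hunk swaps hand-copied `region`/`ec2_url`/key options for the shared `ec2_argument_spec()` helper plus a module-specific update. Conceptually it is just a non-destructive dict merge; a Python 3 sketch of that pattern (`build_argument_spec` is a hypothetical name, not the real helper):

```python
def build_argument_spec(common, extra):
    """Merge a shared set of AWS options with module-specific ones without
    mutating the shared dict, the pattern the diff adopts via
    ec2_argument_spec() + argument_spec.update(...)."""
    spec = dict(common)
    spec.update(extra)
    return spec
```

Centralizing the common options is what lets the same hunk pick up `profile` and `security_token` support without editing every module.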
- diff -r ansible/ansible/library/cloud/ec2_vol ansible-1.6.6/library/cloud/ec2_vol
- 161c161
- < - local_action:
- ---
- > - location: action
- diff -r ansible/ansible/library/cloud/ec2_vpc ansible-1.6.6/library/cloud/ec2_vpc
- 19c19
- < module: ec2_vpc
- ---
- > module: ec2_vpc
- 61,62c61,62
- < - 'A dictionary array of resource tags of the form: { tag1: value1, tag2: value2 }. Tags in this list are used in conjunction with CIDR block to uniquely identify a VPC in lieu of vpc_id. Therefore, if CIDR/Tag combination does not exits, a new VPC will be created. VPC tags not on this list will be ignored. Prior to 1.7, specifying a resource tag was optional.'
- < required: true
- ---
- > - 'A dictionary array of resource tags of the form: { tag1: value1, tag2: value2 }. Tags in this list are used in conjunction with CIDR block to uniquely identify a VPC in lieu of vpc_id. Therefore, if CIDR/Tag combination does not exits, a new VPC will be created. VPC tags not on this list will be ignored.'
- > required: false
- 99c99
- < - region in which the resource exists.
- ---
- > - region in which the resource exists.
- 105c105
- < - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used.
- ---
- > - AWS secret key. If not set then the value of the AWS_SECRET_KEY environment variable is used.
- 146c146
- < subnets:
- ---
- > subnets:
- 158c158
- < - subnets:
- ---
- > - subnets:
- 161c161
- < routes:
- ---
- > routes:
- 176,177c176,177
- < vpc_id: vpc-aaaaaaa
- < region: us-west-2
- ---
- > vpc_id: vpc-aaaaaaa
- > region: us-west-2
- 218c218
- <
- ---
- >
- 221c221
- < msg='You must specify either a vpc_id or a cidr block + list of unique tags, aborting'
- ---
- > msg='You must specify either a vpc id or a cidr block + list of unique tags, aborting'
- 231c231
- <
- ---
- >
- 234c234
- <
- ---
- >
- 262c262
- < about the VPC and subnets that were launched
- ---
- > about the VPC and subnets that were launched
- 264,265c264,265
- <
- < id = module.params.get('vpc_id')
- ---
- >
- > id = module.params.get('id')
- 273d272
- < vpc_spec_tags = module.params.get('resource_tags')
- 278,282d276
- < if subnets is None:
- < subnets = []
- < if route_tables is None:
- < route_tables = []
- <
- 297,311c291,298
- < try:
- < pvpc = vpc_conn.get_all_vpcs(vpc.id)
- < if hasattr(pvpc, 'state'):
- < if pvpc.state == "available":
- < pending = False
- < elif hasattr(pvpc[0], 'state'):
- < if pvpc[0].state == "available":
- < pending = False
- < # sometimes vpc_conn.create_vpc() will return a vpc that can't be found yet by vpc_conn.get_all_vpcs()
- < # when that happens, just wait a bit longer and try again
- < except boto.exception.BotoServerError, e:
- < if e.error_code != 'InvalidVpcID.NotFound':
- < raise
- < if pending:
- < time.sleep(5)
- ---
- > pvpc = vpc_conn.get_all_vpcs(vpc.id)
- > if hasattr(pvpc, 'state'):
- > if pvpc.state == "available":
- > pending = False
- > elif hasattr(pvpc[0], 'state'):
- > if pvpc[0].state == "available":
- > pending = False
- > time.sleep(5)
- 321a309
- > vpc_spec_tags = module.params.get('resource_tags')
- 324c312
- < if not set(vpc_spec_tags.items()).issubset(set(vpc_tags.items())):
- ---
- > if vpc_spec_tags and not set(vpc_spec_tags.items()).issubset(set(vpc_tags.items())):
- 327c315
- < for (key, value) in set(vpc_spec_tags.items()):
- ---
- > for (key, value) in set(vpc_spec_tags.items()):
- 330c318
- <
- ---
- >
- 345c333
- <
- ---
- >
- 347c335
- <
- ---
- >
- 371c359
- <
- ---
- >
- 388c376
- < if len(igws) > 1:
- ---
- > if len(igws) > 1:
- 422,423c410,411
- <
- < # Work through each route table and update/create to match dictionary array
- ---
- >
- > # Work through each route table and update/create to match dictionary array
- 429,430c417,418
- < route_kwargs = {}
- < if route['gw'] == 'igw':
- ---
- > r_gateway = route['gw']
- > if r_gateway == 'igw':
- 436,441c424,425
- < route_kwargs['gateway_id'] = igw.id
- < elif route['gw'].startswith('i-'):
- < route_kwargs['instance_id'] = route['gw']
- < else:
- < route_kwargs['gateway_id'] = route['gw']
- < vpc_conn.create_route(new_rt.id, route['dest'], **route_kwargs)
- ---
- > r_gateway = igw.id
- > vpc_conn.create_route(new_rt.id, route['dest'], r_gateway)
- 473c457
- <
- ---
- >
- 500c484
- <
- ---
- >
- 504c488
- < 'cidr': sn.cidr_block,
- ---
- > 'cidr': sn.cidr_block,
- 530c514
- <
- ---
- >
- 532c516
- <
- ---
- >
- 578c562
- < subnets = dict(type='list', default=[]),
- ---
- > subnets = dict(type='list'),
- 581,582c565,566
- < resource_tags = dict(type='dict', required=True),
- < route_tables = dict(type='list', default=[]),
- ---
- > resource_tags = dict(type='dict'),
- > route_tables = dict(type='list'),
- 594c578
- <
- ---
- >
- 596c580
- < if region:
- ---
- > if region:
- 599c583
- < region,
- ---
- > region,
- 607c591
- <
- ---
- >
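The `ec2_vpc` route-table hunk replaces a positional gateway argument with keyword arguments chosen by inspecting the `gw` value: `'igw'` resolves to the VPC's internet gateway, an `i-` prefix means an instance target, and anything else is passed through as a gateway id. A Python 3 sketch of that dispatch (`route_kwargs_for` is a hypothetical name; the returned dict feeds `vpc_conn.create_route(rt_id, dest, **kwargs)`):

```python
def route_kwargs_for(gw, igw_id):
    """Choose create_route keyword arguments the way the newer code does:
    'igw' -> the VPC's internet gateway id, 'i-*' -> an instance id,
    anything else -> a literal gateway id."""
    if gw == 'igw':
        return {'gateway_id': igw_id}
    if gw.startswith('i-'):
        return {'instance_id': gw}
    return {'gateway_id': gw}
```

The same file's creation loop also gains a try/except for `InvalidVpcID.NotFound`, the same eventual-consistency workaround the `ec2` module's describe loop uses.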
- diff -r ansible/ansible/library/cloud/glance_image ansible-1.6.6/library/cloud/glance_image
- 107,112d106
- < endpoint_type:
- < description:
- < - endpoint URL type
- < choices: [publicURL, internalURL]
- < required: false
- < default: publicURL
- 123c117
- < container_format=bare
- ---
- > container_format=bare
- 136d129
- <
- 143,144c136,137
- < except Exception, e:
- < module.fail_json(msg="Error authenticating to the keystone: %s " % e.message)
- ---
- > except Exception, e:
- > module.fail_json(msg = "Error authenticating to the keystone: %s " % e.message)
- 145a139
- >
- 147,148c141
- <
- < def _get_endpoint(module, client, endpoint_type):
- ---
- > def _get_endpoint(module, client):
- 150c143
- < endpoint = client.service_catalog.url_for(service_type='image', endpoint_type=endpoint_type)
- ---
- > endpoint = client.service_catalog.url_for(service_type='image', endpoint_type='publicURL')
- 152c145
- < module.fail_json(msg="Error getting endpoint for glance: %s" % e.message)
- ---
- > module.fail_json(msg = "Error getting endpoint for glance: %s" % e.message)
- 155d147
- <
- 159c151
- < endpoint =_get_endpoint(module, _ksclient, kwargs.get('endpoint_type'))
- ---
- > endpoint =_get_endpoint(module, _ksclient)
- 166c158
- < module.fail_json(msg="Error in connecting to glance: %s" % e.message)
- ---
- > module.fail_json(msg = "Error in connecting to glance: %s" %e.message)
- 169d160
- <
- 175c166
- < return None
- ---
- > return None
- 177,178c168
- < module.fail_json(msg="Error in fetching image list: %s" % e.message)
- <
- ---
- > module.fail_json(msg = "Error in fetching image list: %s" %e.message)
- 189c179
- < try:
- ---
- > try:
- 200,201c190,191
- < except Exception, e:
- < module.fail_json(msg="Error in creating image: %s" % e.message)
- ---
- > except Exception, e:
- > module.fail_json(msg = "Error in creating image: %s" %e.message)
- 203c193
- < module.exit_json(changed=True, result=image.status, id=image.id)
- ---
- > module.exit_json(changed = True, result = image.status, id=image.id)
- 205,206c195
- < module.fail_json(msg=" The module timed out, please check manually " + image.status)
- <
- ---
- > module.fail_json(msg = " The module timed out, please check manually " + image.status)
- 209c198
- < try:
- ---
- > try:
- 214,217c203,205
- < module.fail_json(msg="Error in deleting image: %s" % e.message)
- < module.exit_json(changed=True, result="Deleted")
- <
- <
- ---
- > module.fail_json(msg = "Error in deleting image: %s" %e.message)
- > module.exit_json(changed = True, result = "Deleted")
- >
- 219c207
- <
- ---
- >
- 235,238c223,225
- < timeout = dict(default=180),
- < file = dict(default=None),
- < endpoint_type = dict(default='publicURL', choices=['publicURL', 'internalURL']),
- < state = dict(default='present', choices=['absent', 'present'])
- ---
- > timeout = dict(default=180),
- > file = dict(default=None),
- > state = dict(default='present', choices=['absent', 'present'])
- 244c231
- < module.fail_json(msg="Either file or copy_from variable should be set to create the image")
- ---
- > module.fail_json(msg = "Either file or copy_from variable should be set to create the image")
- 249c236
- < module.exit_json(changed=False, id=id, result="success")
- ---
- > module.exit_json(changed = False, id = id, result = "success")
- 254,255c241,242
- < if not id:
- < module.exit_json(changed=False, result="Success")
- ---
- > if not id:
- > module.exit_json(changed = False, result = "Success")
- 261a249
- >
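The glance_image hunks above thread a new `endpoint_type` option through `_get_endpoint` instead of hardcoding `'publicURL'`. A minimal sketch of what `service_catalog.url_for` resolves, assuming the Keystone v2 catalog layout (a list of services, each carrying a list of endpoint dicts keyed by endpoint type); this `url_for` and the catalog shape are illustrative stand-ins, not the module's code:

```python
def url_for(catalog, service_type, endpoint_type='publicURL'):
    """Pick an endpoint URL out of a Keystone v2-style service catalog."""
    for service in catalog:
        if service.get('type') == service_type:
            for endpoint in service.get('endpoints', []):
                if endpoint_type in endpoint:
                    return endpoint[endpoint_type]
    raise KeyError('no %s endpoint for service %s' % (endpoint_type, service_type))
```

With `endpoint_type='internalURL'` the module can now reach glance over an internal network, which the hardcoded `'publicURL'` in 1.6.6 prevented.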
- diff -r ansible/ansible/library/cloud/keystone_user ansible-1.6.6/library/cloud/keystone_user
- 340c340
- < msg="exception: %s" % e)
- ---
- > msg="exception: %s" % e.message)
- 342c342
- < module.fail_json(msg="exception: %s" % e)
- ---
- > module.fail_json(msg=e.message)
- diff -r ansible/ansible/library/cloud/linode ansible-1.6.6/library/cloud/linode
- 91c91
- < requirements: [ "linode-python", "pycurl" ]
- ---
- > requirements: [ "linode-python" ]
- 159,165c159,161
- < import pycurl
- < except ImportError:
- < print("failed=True msg='pycurl required for this module'")
- < sys.exit(1)
- <
- <
- < try:
- ---
- > # linode module raise warning due to ssl - silently ignore them ...
- > import warnings
- > warnings.simplefilter("ignore")
- 170d165
- <
- diff -r ansible/ansible/library/cloud/nova_keypair ansible-1.6.6/library/cloud/nova_keypair
- 22c22
- < from novaclient import exceptions as exc
- ---
- > from novaclient import exceptions
- 118,121c118
- < if module.params['public_key'] and (module.params['public_key'] != key.public_key ):
- < module.fail_json(msg = "name {} present but key hash not the same as offered. Delete key first.".format(key['name']))
- < else:
- < module.exit_json(changed = False, result = "Key present")
- ---
- > module.exit_json(changed = False, result = "Key present")
- diff -r ansible/ansible/library/cloud/quantum_floating_ip ansible-1.6.6/library/cloud/quantum_floating_ip
- 145d144
- < subnet_id = None
- 147,149d145
- < kwargs = {'name': internal_network_name}
- < networks = neutron.list_networks(**kwargs)
- < network_id = networks['networks'][0]['id']
- 151,152c147
- < 'network_id': network_id,
- < 'ip_version': 4
- ---
- > 'name': internal_network_name,
- 154,155c149,150
- < subnets = neutron.list_subnets(**kwargs)
- < subnet_id = subnets['subnets'][0]['id']
- ---
- > networks = neutron.list_networks(**kwargs)
- > subnet_id = networks['networks'][0]['subnets'][0]
- 186c181
- < def _create_floating_ip(neutron, module, port_id, net_id, fixed_ip):
- ---
- > def _create_floating_ip(neutron, module, port_id, net_id):
- 189,190c184
- < 'floating_network_id': net_id,
- < 'fixed_ip_address': fixed_ip
- ---
- > 'floating_network_id': net_id
- 260c254
- < _create_floating_ip(neutron, module, port_id, net_id, fixed_ip)
- ---
- > _create_floating_ip(neutron, module, port_id, net_id)
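The quantum_floating_ip hunk above stops trusting `networks[...]['subnets'][0]` and instead resolves the network by name, then lists its IPv4 subnets explicitly. A sketch of that two-step lookup, with the Neutron list calls passed in as plain callables so the shape of the logic is visible (the dict layouts mirror what `list_networks`/`list_subnets` return):

```python
def find_ipv4_subnet_id(list_networks, list_subnets, internal_network_name):
    # Step 1: resolve the network name to an id.
    networks = list_networks(name=internal_network_name)
    network_id = networks['networks'][0]['id']
    # Step 2: list only the IPv4 subnets attached to that network.
    subnets = list_subnets(network_id=network_id, ip_version=4)
    return subnets['subnets'][0]['id']
```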
- diff -r ansible/ansible/library/cloud/quantum_subnet ansible-1.6.6/library/cloud/quantum_subnet
- 71,75d70
- < name:
- < description:
- < - The name of the subnet that should be created
- < required: true
- < default: None
- diff -r ansible/ansible/library/cloud/rax ansible-1.6.6/library/cloud/rax
- 203c203
- < def server_to_dict(obj):
- ---
- > def pyrax_object_to_dict(obj):
- 219c219
- < extra_create_args, existing=[]):
- ---
- > extra_create_args):
- 269c269
- < instance = server_to_dict(server)
- ---
- > instance = pyrax_object_to_dict(server)
- 277,279d276
- < untouched = [server_to_dict(s) for s in existing]
- < instances = success + untouched
- <
- 283c280
- < 'instances': instances,
- ---
- > 'instances': success + error + timeout,
- 288c285
- < 'instances': [i['id'] for i in instances],
- ---
- > 'instances': [i['id'] for i in success + error + timeout],
- 306c303
- < def delete(module, instance_ids, wait, wait_timeout, kept=[]):
- ---
- > def delete(module, instance_ids, wait, wait_timeout):
- 324c321
- < instance = server_to_dict(server)
- ---
- > instance = pyrax_object_to_dict(server)
- 338d334
- < instances[instance_id]['rax_status'] = 'DELETED'
- 354,355d349
- < instances = [server_to_dict(s) for s in kept]
- <
- 359c353
- < 'instances': instances,
- ---
- > 'instances': success + error + timeout,
- 364c358
- < 'instances': [i['id'] for i in instances],
- ---
- > 'instances': [i['id'] for i in success + error + timeout],
- 394a389,391
- > for key, value in meta.items():
- > meta[key] = repr(value)
- >
- 401,409d397
- < # Normalize and ensure all metadata values are strings
- < for k, v in meta.items():
- < if isinstance(v, list):
- < meta[k] = ','.join(['%s' % i for i in v])
- < elif isinstance(v, dict):
- < meta[k] = json.dumps(v)
- < elif not isinstance(v, basestring):
- < meta[k] = '%s' % v
- <
- 516d503
- < kept = servers[:count]
- 521,522c508
- < delete(module, instance_ids, wait, wait_timeout,
- < kept=kept)
- ---
- > delete(module, instance_ids, wait, wait_timeout)
- 533,539c519
- < instances = []
- < instance_ids = []
- < for server in servers:
- < instances.append(server_to_dict(server))
- < instance_ids.append(server.id)
- < module.exit_json(changed=False, action=None,
- < instances=instances,
- ---
- > module.exit_json(changed=False, action=None, instances=[],
- 541c521
- < instance_ids={'instances': instance_ids,
- ---
- > instance_ids={'instances': [],
- 591c571
- < instances.append(server_to_dict(server))
- ---
- > instances.append(pyrax_object_to_dict(server))
- 604,605c584
- < wait, wait_timeout, disk_config, group, nics, extra_create_args,
- < existing=servers)
- ---
- > wait, wait_timeout, disk_config, group, nics, extra_create_args)
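The rax hunk above (devel side, `<` lines) replaces the blanket `repr(value)` of 1.6.6 with type-aware string coercion of metadata values. As a standalone sketch (`basestring` in the Python 2 original becomes `str` here):

```python
import json

def normalize_meta(meta):
    """Coerce every metadata value to a plain string."""
    for k, v in meta.items():
        if isinstance(v, list):
            meta[k] = ','.join(['%s' % i for i in v])   # lists -> comma-joined
        elif isinstance(v, dict):
            meta[k] = json.dumps(v)                     # dicts -> JSON
        elif not isinstance(v, str):                    # numbers, bools, ...
            meta[k] = '%s' % v
    return meta
```

Unlike `repr()`, this never produces quoted strings like `"'value'"` for values that were already strings.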
- diff -r ansible/ansible/library/cloud/rax_cbs ansible-1.6.6/library/cloud/rax_cbs
- 144,145d143
- < except pyrax.exc.NotFound:
- < pass
- diff -r ansible/ansible/library/cloud/rax_dns ansible-1.6.6/library/cloud/rax_dns
- 47,50d46
- < notes:
- < - "It is recommended that plays utilizing this module be run with C(serial: 1)
- < to avoid exceeding the API request limit imposed by the Rackspace CloudDNS
- < API"
- 129c125
- < module.fail_json(msg='%s' % e.message)
- ---
- > module.fail_json('%s' % e.message)
- diff -r ansible/ansible/library/cloud/rax_dns_record ansible-1.6.6/library/cloud/rax_dns_record
- 70,73d69
- < notes:
- < - "It is recommended that plays utilizing this module be run with C(serial: 1)
- < to avoid exceeding the API request limit imposed by the Rackspace CloudDNS
- < API"
- Only in ansible/ansible/library/cloud: rax_meta
- Only in ansible/ansible/library/cloud: rax_scaling_group
- Only in ansible/ansible/library/cloud: rax_scaling_policy
- diff -r ansible/ansible/library/cloud/rds ansible-1.6.6/library/cloud/rds
- 62a63
- > choices: [ 'db.t1.micro', 'db.m1.small', 'db.m1.medium', 'db.m1.large', 'db.m1.xlarge', 'db.m2.xlarge', 'db.m2.2xlarge', 'db.m2.4xlarge', 'db.m3.medium', 'db.m3.large', 'db.m3.xlarge', 'db.m3.2xlarge', 'db.cr1.8xlarge' ]
- 292c293
- < instance_type = dict(aliases=['type'], required=False),
- ---
- > instance_type = dict(aliases=['type'], choices=['db.t1.micro', 'db.m1.small', 'db.m1.medium', 'db.m1.large', 'db.m1.xlarge', 'db.m2.xlarge', 'db.m2.2xlarge', 'db.m2.4xlarge', 'db.m3.medium', 'db.m3.large', 'db.m3.xlarge', 'db.m3.2xlarge', 'db.cr1.8xlarge'], required=False),
- 302c303
- < vpc_security_groups = dict(type='list', required=False),
- ---
- > vpc_security_groups = dict(required=False),
- 467,470c468
- < groups_list = []
- < for x in vpc_security_groups:
- < groups_list.append(boto.rds.VPCSecurityGroupMembership(vpc_group=x))
- < params["vpc_security_groups"] = groups_list
- ---
- > params["vpc_security_groups"] = vpc_security_groups.split(',')
- 545,550d542
- <
- < # The name of the database has now changed, so we have
- < # to force result to contain the new instance, otherwise
- < # the call below to get_current_resource will fail since it
- < # will be looking for the old instance name.
- < result.id = new_instance_name
- 629,632d620
- < if resource.vpc_security_groups is not None:
- < d["vpc_security_groups"] = ','.join(x.vpc_group for x in resource.vpc_security_groups)
- < else:
- < d["vpc_security_groups"] = None
- 636d623
- < d["vpc_security_groups"] = None
- diff -r ansible/ansible/library/cloud/rds_param_group ansible-1.6.6/library/cloud/rds_param_group
- 162,170c162,165
- < try:
- < for modifier in INT_MODIFIERS.keys():
- < if value.endswith(modifier):
- < converted_value = int(value[:-1]) * INT_MODIFIERS[modifier]
- < converted_value = int(converted_value)
- < except ValueError:
- < # may be based on a variable (ie. {foo*3/4}) so
- < # just pass it on through to boto
- < converted_value = str(value)
- ---
- > for modifier in INT_MODIFIERS.keys():
- > if value.endswith(modifier):
- > converted_value = int(value[:-1]) * INT_MODIFIERS[modifier]
- > converted_value = int(converted_value)
- 198,207c193
- < try:
- < old_value = param.value
- < except ValueError:
- < # some versions of boto have problems with retrieving
- < # integer values from params that may have their value
- < # based on a variable (ie. {foo*3/4}), so grab it in a
- < # way that bypasses the property functions
- < old_value = param._value
- <
- < if old_value != new_value:
- ---
- > if param.value != new_value:
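The rds_param_group hunks above wrap the modifier conversion and the `param.value` read in try/except, because a parameter value may be an RDS formula such as `{foo*3/4}` rather than a literal integer. A sketch of the conversion, assuming an `INT_MODIFIERS` table mapping K/M/G/T to powers of 1024 (the exact table in the module may differ):

```python
INT_MODIFIERS = {'K': 1024, 'M': 1024 ** 2, 'G': 1024 ** 3, 'T': 1024 ** 4}

def convert_modified_int(value):
    converted_value = value
    try:
        for modifier in INT_MODIFIERS:
            if value.endswith(modifier):
                converted_value = int(value[:-1]) * INT_MODIFIERS[modifier]
    except ValueError:
        # may be based on a variable (e.g. {foo*3/4}K), so pass it through to boto
        converted_value = str(value)
    return converted_value
```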
- diff -r ansible/ansible/library/cloud/vsphere_guest ansible-1.6.6/library/cloud/vsphere_guest
- 91,96d90
- < vm_hw_version:
- < description:
- < - Desired hardware version identifier (for example, "vmx-08" for vms that needs to be managed with vSphere Client). Note that changing hardware version of existing vm is not supported.
- < required: false
- < default: null
- < version_added: "1.7"
- 505c499
- < if int(vm_hardware['memory_mb']) != vm.properties.config.hardware.memoryMB:
- ---
- > if vm_hardware['memory_mb'] != vm.properties.config.hardware.memoryMB:
- 513c507
- < elif int(vm_hardware['memory_mb']) < vm.properties.config.hardware.memoryMB:
- ---
- > elif vm_hardware['memory_mb'] < vm.properties.config.hardware.memoryMB:
- 523c517
- < elif int(vm_hardware['memory_mb']) < vm.properties.config.hardware.memoryMB:
- ---
- > elif vm_hardware['memory_mb'] < vm.properties.config.hardware.memoryMB:
- 534c528
- < if int(vm_hardware['num_cpus']) != vm.properties.config.hardware.numCPU:
- ---
- > if vm_hardware['num_cpus'] != vm.properties.config.hardware.numCPU:
- 542c536
- < elif int(vm_hardware['num_cpus']) < vm.properties.config.hardware.numCPU:
- ---
- > elif vm_hardware['num_cpus'] < vm.properties.config.hardware.numCPU:
- 553c547
- < elif int(vm_hardware['num_cpus']) < vm.properties.config.hardware.numCPU:
- ---
- > elif vm_hardware['num_cpus'] < vm.properties.config.hardware.numCPU:
- 602c596
- < def create_vm(vsphere_client, module, esxi, resource_pool, cluster_name, guest, vm_extra_config, vm_hardware, vm_disk, vm_nic, vm_hw_version, state):
- ---
- > def create_vm(vsphere_client, module, esxi, resource_pool, cluster_name, guest, vm_extra_config, vm_hardware, vm_disk, vm_nic, state):
- 607,611c601,604
- < dclist = [k for k,
- < v in vsphere_client.get_datacenters().items() if v == datacenter]
- < if dclist:
- < dcmor=dclist[0]
- < else:
- ---
- > dcmor = [k for k,
- > v in vsphere_client.get_datacenters().items() if v == datacenter][0]
- >
- > if dcmor is None:
- 706,707d698
- < if vm_hw_version:
- < config.set_element_version(vm_hw_version)
- 714c705
- < if 'notes' in vm_extra_config:
- ---
- > if vm_extra_config['notes'] is not None:
- 737c728
- < disksize = int(vm_disk[disk]['size_gb'])
- ---
- > disksize = vm_disk[disk]['size_gb']
- 740c731
- < except (KeyError, ValueError):
- ---
- > except KeyError:
- 742c733,735
- < module.fail_json(msg="Error on %s definition. size needs to be specified as an integer." % disk)
- ---
- > module.fail_json(
- > msg="Error on %s definition. size needs to be"
- > " specified." % disk)
- 826,827c819,821
- < # We always need to get the vm because we are going to gather facts
- < vm = vsphere_client.get_vm_by_name(guest)
- ---
- > vm = None
- > if vm_extra_config or state in ['powered_on', 'powered_off']:
- > vm = vsphere_client.get_vm_by_name(guest)
- 1074d1067
- < vm_hw_version=dict(required=False, default=None, type='str'),
- 1111d1103
- < vm_hw_version = module.params['vm_hw_version']
- 1213d1204
- < vm_hw_version=vm_hw_version,
- diff -r ansible/ansible/library/commands/command ansible-1.6.6/library/commands/command
- 80c80
- < # Example from Ansible Playbooks.
- ---
- > # Example from Ansible Playbooks
- 83c83
- < # Run the command if the specified file does not exist.
- ---
- > # Run the command if the specified file does not exist
- 85,92d84
- <
- < # You can also use the 'args' form to provide the options. This command
- < # will change the working directory to somedir/ and will only run when
- < # /path/to/database doesn't exist.
- < - command: /usr/bin/make_database.sh arg1 arg2
- < args:
- < chdir: somedir/
- < creates: /path/to/database
- diff -r ansible/ansible/library/commands/shell ansible-1.6.6/library/commands/shell
- 56c56
- < # file on the remote.
- ---
- > # file on the remote
- 58,68d57
- <
- < # Change the working directory to somedir/ before executing the command.
- < - shell: somescript.sh >> somelog.txt chdir=somedir/
- <
- < # You can also use the 'args' form to provide the options. This command
- < # will change the working directory to somedir/ and will only run when
- < # somedir/somelog.txt doesn't exist.
- < - shell: somescript.sh >> somelog.txt
- < args:
- < chdir: somedir/
- < creates: somelog.txt
- diff -r ansible/ansible/library/database/mysql_db ansible-1.6.6/library/database/mysql_db
- 53c53
- < - Port of the MySQL server. Requires login_host be defined as other then localhost if login_port is used
- ---
- > - Port of the MySQL server
- 143,145d142
- < if not os.path.exists(target):
- < return module.fail_json(msg="target %s does not exist on the host" % target)
- <
- 154,169c151
- < gunzip_path = module.get_bin_path('gunzip')
- < if gunzip_path:
- < rc, stdout, stderr = module.run_command('%s %s' % (gunzip_path, target))
- < if rc != 0:
- < return rc, stdout, stderr
- < cmd += " < %s" % pipes.quote(os.path.splitext(target)[0])
- < rc, stdout, stderr = module.run_command(cmd, use_unsafe_shell=True)
- < if rc != 0:
- < return rc, stdout, stderr
- < gzip_path = module.get_bin_path('gzip')
- < if gzip_path:
- < rc, stdout, stderr = module.run_command('%s %s' % (gzip_path, os.path.splitext(target)[0]))
- < else:
- < module.fail_json(msg="gzip command not found")
- < else:
- < module.fail_json(msg="gunzip command not found")
- ---
- > cmd = 'gunzip < ' + pipes.quote(target) + ' | ' + cmd
- 171,186c153
- < bunzip2_path = module.get_bin_path('bunzip2')
- < if bunzip2_path:
- < rc, stdout, stderr = module.run_command('%s %s' % (bunzip2_path, target))
- < if rc != 0:
- < return rc, stdout, stderr
- < cmd += " < %s" % pipes.quote(os.path.splitext(target)[0])
- < rc, stdout, stderr = module.run_command(cmd, use_unsafe_shell=True)
- < if rc != 0:
- < return rc, stdout, stderr
- < bzip2_path = module.get_bin_path('bzip2')
- < if bzip2_path:
- < rc, stdout, stderr = module.run_command('%s %s' % (bzip2_path, os.path.splitext(target)[0]))
- < else:
- < module.fail_json(msg="bzip2 command not found")
- < else:
- < module.fail_json(msg="bunzip2 command not found")
- ---
- > cmd = 'bunzip2 < ' + pipes.quote(target) + ' | ' + cmd
- 189c156
- < rc, stdout, stderr = module.run_command(cmd, use_unsafe_shell=True)
- ---
- > rc, stdout, stderr = module.run_command(cmd, use_unsafe_shell=True)
- 268c235
- < name=dict(required=True, aliases=['db']),
- ---
- > db=dict(required=True, aliases=['name']),
- 279c246
- < db = module.params["name"]
- ---
- > db = module.params["db"]
- 311,312d277
- < elif module.params["login_port"] != "3306" and module.params["login_host"] == "localhost":
- < module.fail_json(msg="login_host is required when login_port is defined, login_host cannot be localhost when login_port is defined")
- 317,321c282
- < if "Unknown database" in str(e):
- < errno, errstr = e.args
- < module.fail_json(msg="ERROR: %s %s" % (errno, errstr))
- < else:
- < module.fail_json(msg="unable to connect, check login_user and login_password are correct, or alternatively check ~/.my.cnf contains credentials")
- ---
- > module.fail_json(msg="unable to connect, check login_user and login_password are correct, or alternatively check ~/.my.cnf contains credentials")
- 326,329c287
- < try:
- < changed = db_delete(cursor, db)
- < except Exception, e:
- < module.fail_json(msg="error deleting database: " + str(e))
- ---
- > changed = db_delete(cursor, db)
- 350,353c308
- < try:
- < changed = db_create(cursor, db, encoding, collation)
- < except Exception, e:
- < module.fail_json(msg="error creating database: " + str(e))
- ---
- > changed = db_create(cursor, db, encoding, collation)
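Two of the mysql_db additions above are plain precondition checks: the dump/import target must exist on the host, and per the updated `login_port` documentation a non-default port requires `login_host` to be something other than localhost. Condensed into one hypothetical helper (the name and signature are illustrative):

```python
import os

def validate_mysql_args(login_host, login_port, target=None):
    """Precondition checks mirroring the devel-branch additions above."""
    if target is not None and not os.path.exists(target):
        raise ValueError("target %s does not exist on the host" % target)
    if login_port != "3306" and login_host == "localhost":
        raise ValueError("login_host cannot be localhost when login_port is defined")
```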
- diff -r ansible/ansible/library/database/mysql_user ansible-1.6.6/library/database/mysql_user
- 123,125d122
- < # Specify grants composed of more than one word
- < - mysql_user: name=replication password=12345 priv=*.*:"REPLICATION CLIENT" state=present
- <
- diff -r ansible/ansible/library/database/mysql_variables ansible-1.6.6/library/database/mysql_variables
- 59,60c59,60
- < # Check for sync_binlog setting
- < - mysql_variables: variable=sync_binlog
- ---
- > # Check for sync_binary_log setting
- > - mysql_variables: variable=sync_binary_log
- diff -r ansible/ansible/library/database/postgresql_user ansible-1.6.6/library/database/postgresql_user
- 47c47
- < - "When passing an encrypted password, the encrypted parameter must also be true, and it must be generated with the format C('str[\\"md5\\"] + md5[ password + username ]'), resulting in a total of 35 characters. An easy way to do this is: C(echo \\"md5`echo -n \\"verysecretpasswordJOE\\" | md5`\\")."
- ---
- > - "When passing an encrypted password it must be generated with the format C('str[\\"md5\\"] + md5[ password + username ]'), resulting in a total of 35 characters. An easy way to do this is: C(echo \\"md5`echo -n \\"verysecretpasswordJOE\\" | md5`\\")."
- diff -r ansible/ansible/library/files/acl ansible-1.6.6/library/files/acl
- 66c66
- < - the entity type of the ACL to apply, see setfacl documentation for more info.
- ---
- > - if the target is a directory, setting this to yes will make it the default acl for entities created inside the directory. It causes an error if name is a file.
- diff -r ansible/ansible/library/files/assemble ansible-1.6.6/library/files/assemble
- 77c77
- < all files are assembled. All "\\" (backslash) must be escaped as
- ---
- > all files are assembled. All "\" (backslash) must be escaped as
- 195c195
- < changed = module.set_fs_attributes_if_different(file_args, changed)
- ---
- > changed = module.set_file_attributes_if_different(file_args, changed)
- diff -r ansible/ansible/library/files/copy ansible-1.6.6/library/files/copy
- 86d85
- < extends_documentation_fragment: files
- 127c126
- < changed = module.set_fs_attributes_if_different(directory_args, changed)
- ---
- > changed = module.set_directory_attributes_if_different(directory_args, changed)
- 193,201d191
- < try:
- < # os.path.exists() can return false in some
- < # circumstances where the directory does not have
- < # the execute bit for the current user set, in
- < # which case the stat() call will raise an OSError
- < os.stat(os.path.dirname(dest))
- < except OSError, e:
- < if "permission denied" in str(e).lower():
- < module.fail_json(msg="Destination directory %s is not accessible" % (os.path.dirname(dest)))
- 237c227
- < res_args['changed'] = module.set_fs_attributes_if_different(file_args, res_args['changed'])
- ---
- > res_args['changed'] = module.set_file_attributes_if_different(file_args, res_args['changed'])
- diff -r ansible/ansible/library/files/file ansible-1.6.6/library/files/file
- 167a168
- > os.makedirs(path)
- 169,176d169
- < curpath = ''
- < for dirname in path.split('/'):
- < curpath = '/'.join([curpath, dirname])
- < if not os.path.exists(curpath):
- < os.mkdir(curpath)
- < tmp_file_args = file_args.copy()
- < tmp_file_args['path']=curpath
- < changed = module.set_fs_attributes_if_different(tmp_file_args, changed)
- 206c199
- < module.fail_json(path=path, msg='refusing to convert between %s and %s for %s' % (prev_state, state, path))
- ---
- > module.fail_json(path=path, msg='refusing to convert between %s and %s for %s' % (prev_state, state, src))
- 211c204
- < module.fail_json(path=path, msg='refusing to convert between %s and %s for %s' % (prev_state, state, path))
- ---
- > module.fail_json(path=path, msg='refusing to convert between %s and %s for %s' % (prev_state, state, src))
- 253,255d245
- <
- < if module.check_mode and not os.path.exists(path):
- < module.exit_json(dest=path, src=src, changed=changed)
- diff -r ansible/ansible/library/files/ini_file ansible-1.6.6/library/files/ini_file
- 98d97
- < cp.optionxform = identity
- 155,167d153
- < # identity
- <
- < def identity(arg):
- < """
- < This function simply returns its argument. It serves as a
- < replacement for ConfigParser.optionxform, which by default
- < changes arguments to lower case. The identity function is a
- < better choice than str() or unicode(), because it is
- < encoding-agnostic.
- < """
- < return arg
- <
- < # ==============================================================
- 196c182
- < changed = module.set_fs_attributes_if_different(file_args, changed)
- ---
- > changed = module.set_file_attributes_if_different(file_args, changed)
- diff -r ansible/ansible/library/files/lineinfile ansible-1.6.6/library/files/lineinfile
- 5d4
- < # (c) 2014, Ahti Kitsik <[email protected]>
- 22d20
- < import pipes
- 30c28
- < author: Daniel Hokka Zakrisson, Ahti Kitsik
- ---
- > author: Daniel Hokka Zakrisson
- 115c113
- < - validation to run before copying into place. The command is passed
- ---
- > - validation to run before copying into place. The command is passed
- 167c165
- < module.atomic_move(tmpfile, os.path.realpath(dest))
- ---
- > module.atomic_move(tmpfile, dest)
- 172c170
- < if module.set_fs_attributes_if_different(file_args, False):
- ---
- > if module.set_file_attributes_if_different(file_args, False):
- 256,260d253
- <
- < # If the file is not empty then ensure there's a newline before the added line
- < if len(lines)>0 and not (lines[-1].endswith('\n') or lines[-1].endswith('\r')):
- < lines.append(os.linesep)
- <
- 363,370c356,358
- < # Replace escape sequences like '\n' while being sure
- < # not to replace octal escape sequences (\ooo) since they
- < # match the backref syntax
- < if backrefs:
- < line = re.sub(r'(\\[0-9]{1,3})', r'\\\1', params['line'])
- < else:
- < line = params['line']
- < line = module.safe_eval(pipes.quote(line))
- ---
- > # Replace the newline character with an actual newline. Don't replace
- > # escaped \\n, hence sub and not str.replace.
- > line = re.sub(r'\n', os.linesep, params['line'])
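The devel side of the lineinfile hunk above stops translating `\n` directly and instead, when `backrefs=yes`, protects numeric escapes before the line goes through escape processing, so that `\1` or `\012` survive as regex backreferences rather than being consumed. The protection step in isolation:

```python
import re

def protect_numeric_escapes(line):
    """Double the backslash of \\1 .. \\999 style escapes so later escape
    processing leaves them intact as backreferences."""
    return re.sub(r'(\\[0-9]{1,3})', r'\\\1', line)
```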
- diff -r ansible/ansible/library/files/replace ansible-1.6.6/library/files/replace
- 144c144
- < if result[1] > 0 and contents != result[0]:
- ---
- > if result[1] > 0:
- diff -r ansible/ansible/library/files/synchronize ansible-1.6.6/library/files/synchronize
- 224,225c224,225
- < source = '"' + module.params['src'] + '"'
- < dest = '"' + module.params['dest'] + '"'
- ---
- > source = module.params['src']
- > dest = module.params['dest']
- diff -r ansible/ansible/library/files/unarchive ansible-1.6.6/library/files/unarchive
- 123,124c123,124
- < cmd = '%s -x%sf "%s"' % (self.cmd_path, self.zipflag, self.src)
- < rc, out, err = self.module.run_command(cmd, cwd=self.dest)
- ---
- > cmd = '%s -C "%s" -x%sf "%s"' % (self.cmd_path, self.dest, self.zipflag, self.src)
- > rc, out, err = self.module.run_command(cmd)
- 237,239c237
- < res_args['extract_results'] = handler.unarchive()
- < if res_args['extract_results']['rc'] != 0:
- < module.fail_json(msg="failed to unpack %s to %s" % (src, dest), **res_args)
- ---
- > results = handler.unarchive()
- diff -r ansible/ansible/library/monitoring/monit ansible-1.6.6/library/monitoring/monit
- 49a50,51
- > import pipes
- >
- 67,68d68
- < if rc != 0:
- < module.fail_json(msg='monit reload failed', stdout=out, stderr=err)
- 71,89c71,72
- < def status():
- < """Return the status of the process in monit, or the empty string if not present."""
- < rc, out, err = module.run_command('%s summary' % MONIT, check_rc=True)
- < for line in out.split('\n'):
- < # Sample output lines:
- < # Process 'name' Running
- < # Process 'name' Running - restart pending
- < parts = line.lower().split()
- < if len(parts) > 2 and parts[0] == 'process' and parts[1] == "'%s'" % name:
- < return ' '.join(parts[2:])
- < else:
- < return ''
- <
- < def run_command(command):
- < """Runs a monit command, and returns the new status."""
- < module.run_command('%s %s %s' % (MONIT, command, name), check_rc=True)
- < return status()
- <
- < present = status() != ''
- ---
- > rc, out, err = module.run_command('%s summary | grep "Process \'%s\'"' % (MONIT, pipes.quote(name)), use_unsafe_shell=True)
- > present = name in out
- 98,101c81,83
- < status = run_command('reload')
- < if status == '':
- < module.fail_json(msg='%s process not configured with monit' % name, name=name, state=state)
- < else:
- ---
- > module.run_command('%s reload' % MONIT, check_rc=True)
- > rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, pipes.quote(name)), use_unsafe_shell=True)
- > if name in out:
- 102a85,87
- > else:
- > module.fail_json(msg=out, name=name, state=state)
- >
- 105c90,94
- < running = 'running' in status()
- ---
- > rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, pipes.quote(name)), use_unsafe_shell=True)
- > running = 'running' in out.lower()
- >
- > if running and (state == 'started' or state == 'monitored'):
- > module.exit_json(changed=False, name=name, state=state)
- 107c96
- < if running and state in ['started', 'monitored']:
- ---
- > if running and state == 'monitored':
- 113,114c102,104
- < status = run_command('stop')
- < if status in ['not monitored'] or 'stop pending' in status:
- ---
- > module.run_command('%s stop %s' % (MONIT, name))
- > rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, pipes.quote(name)), use_unsafe_shell=True)
- > if 'not monitored' in out.lower() or 'stop pending' in out.lower():
- 116c106
- < module.fail_json(msg='%s process not stopped' % name, status=status)
- ---
- > module.fail_json(msg=out)
- 121,122c111,114
- < status = run_command('unmonitor')
- < if status in ['not monitored']:
- ---
- > module.run_command('%s unmonitor %s' % (MONIT, name))
- > # FIXME: DRY FOLKS!
- > rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, pipes.quote(name)), use_unsafe_shell=True)
- > if 'not monitored' in out.lower():
- 124c116
- < module.fail_json(msg='%s process not unmonitored' % name, status=status)
- ---
- > module.fail_json(msg=out)
- 129,130c121,123
- < status = run_command('restart')
- < if status in ['initializing', 'running'] or 'restart pending' in status:
- ---
- > module.run_command('%s restart %s' % (MONIT, name))
- > rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, name))
- > if 'initializing' in out.lower() or 'restart pending' in out.lower():
- 132c125
- < module.fail_json(msg='%s process not restarted' % name, status=status)
- ---
- > module.fail_json(msg=out)
- 137,138c130,132
- < status = run_command('start')
- < if status in ['initializing', 'running'] or 'start pending' in status:
- ---
- > module.run_command('%s start %s' % (MONIT, name))
- > rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, name))
- > if 'initializing' in out.lower() or 'start pending' in out.lower():
- 140c134
- < module.fail_json(msg='%s process not started' % name, status=status)
- ---
- > module.fail_json(msg=out)
- 145,146c139,141
- < status = run_command('monitor')
- < if status not in ['not monitored']:
- ---
- > module.run_command('%s monitor %s' % (MONIT, name))
- > rc, out, err = module.run_command('%s summary | grep %s' % (MONIT, name))
- > if 'initializing' in out.lower() or 'start pending' in out.lower():
- 148c143
- < module.fail_json(msg='%s process not monitored' % name, status=status)
- ---
- > module.fail_json(msg=out)
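The monit rewrite above replaces fragile `monit summary | grep name` pipelines with a `status()` helper that parses the summary output itself. Its parsing logic, lifted into a pure function over the captured output (note the original lowercases each line, so `name` is matched case-insensitively via a lowercase argument):

```python
def monit_status(summary_output, name):
    """Return the status of *name* from `monit summary` output, '' if absent."""
    for line in summary_output.split('\n'):
        # Sample lines:
        #   Process 'nginx'   Running
        #   Process 'nginx'   Running - restart pending
        parts = line.lower().split()
        if len(parts) > 2 and parts[0] == 'process' and parts[1] == "'%s'" % name:
            return ' '.join(parts[2:])
    return ''
```

Matching the quoted name exactly avoids the substring problem of the old grep approach, where checking `app` would also match `app2`.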
- Only in ansible/ansible/library/monitoring: stackdriver
- diff -r ansible/ansible/library/net_infrastructure/bigip_facts ansible-1.6.6/library/net_infrastructure/bigip_facts
- 108d107
- < from suds import MethodNotFound
- 116a116
- > from suds import MethodNotFound
- 1583c1583
- < module.fail_json(msg="the python suds and bigsuds modules is required")
- ---
- > module.fail_json(msg="the python bigsuds module is required")
- diff -r ansible/ansible/library/net_infrastructure/bigip_monitor_http ansible-1.6.6/library/net_infrastructure/bigip_monitor_http
- 229,237c229
- < try:
- < return str_property == api.LocalLB.Monitor.get_template_string_property([monitor], [str_property['type']])[0]
- < except bigsuds.OperationFailed, e:
- < # happens in check mode if not created yet
- < if "was not found" in str(e):
- < return True
- < else:
- < # genuine exception
- < raise
- ---
- > return str_property == api.LocalLB.Monitor.get_template_string_property([monitor], [str_property['type']])[0]
- 247,256c239
- < try:
- < return int_property == api.LocalLB.Monitor.get_template_integer_property([monitor], [int_property['type']])[0]
- < except bigsuds.OperationFailed, e:
- < # happens in check mode if not created yet
- < if "was not found" in str(e):
- < return True
- < else:
- < # genuine exception
- < raise
- <
- ---
- > return int_property == api.LocalLB.Monitor.get_template_integer_property([monitor], [int_property['type']])[0]
- diff -r ansible/ansible/library/net_infrastructure/bigip_monitor_tcp ansible-1.6.6/library/net_infrastructure/bigip_monitor_tcp
- 248,257c248
- < try:
- < return str_property == api.LocalLB.Monitor.get_template_string_property([monitor], [str_property['type']])[0]
- < except bigsuds.OperationFailed, e:
- < # happens in check mode if not created yet
- < if "was not found" in str(e):
- < return True
- < else:
- < # genuine exception
- < raise
- < return True
- ---
- > return str_property == api.LocalLB.Monitor.get_template_string_property([monitor], [str_property['type']])[0]
- 267,276c258
- < try:
- < return int_property == api.LocalLB.Monitor.get_template_integer_property([monitor], [int_property['type']])[0]
- < except bigsuds.OperationFailed, e:
- < # happens in check mode if not created yet
- < if "was not found" in str(e):
- < return True
- < else:
- < # genuine exception
- < raise
- < return True
- ---
- > return int_property == api.LocalLB.Monitor.get_template_integer_property([monitor], [int_property['type']])[0]
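Both bigip monitor hunks above apply the same pattern: in check mode the monitor template may not exist yet, so a `bigsuds.OperationFailed` whose message contains "was not found" is treated as "no change needed", while any other failure is re-raised. The pattern as a generic, hypothetical helper (the broad `except Exception` stands in for `bigsuds.OperationFailed`):

```python
def property_matches(fetch, expected):
    """Compare a monitor property, tolerating 'was not found' in check mode."""
    try:
        return expected == fetch()
    except Exception as e:          # bigsuds.OperationFailed in the modules above
        if "was not found" in str(e):
            return True             # check mode: template not created yet
        raise                       # genuine exception
```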
- diff -r ansible/ansible/library/net_infrastructure/openvswitch_bridge ansible-1.6.6/library/net_infrastructure/openvswitch_bridge
- 4,7d3
- < # (c) 2013, David Stygstra <[email protected]>
- < #
- < # This file is part of Ansible
- < #
- 25d20
- < author: David Stygstra
- diff -r ansible/ansible/library/net_infrastructure/openvswitch_port ansible-1.6.6/library/net_infrastructure/openvswitch_port
- 4,7d3
- < # (c) 2013, David Stygstra <[email protected]>
- < #
- < # This file is part of Ansible
- < #
- 25d20
- < author: David Stygstra
- diff -r ansible/ansible/library/network/get_url ansible-1.6.6/library/network/get_url
- 286c286
- < if stripped_sha256sum.lower() != destination_checksum:
- ---
- > if stripped_sha256sum != destination_checksum:
- 296c296
- < changed = module.set_fs_attributes_if_different(file_args, changed)
- ---
- > changed = module.set_file_attributes_if_different(file_args, changed)
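The first get_url hunk above makes the sha256 comparison case-insensitive by lowercasing the user-supplied checksum before comparing it to the computed digest. A minimal sketch of the pattern (the helper name is invented, not Ansible's API):

```python
import hashlib

def checksum_matches(data, expected_sha256):
    # hashlib's hexdigest() is always lowercase, so normalize the
    # user-supplied checksum before comparing; vendors often publish
    # digests in uppercase.
    return expected_sha256.strip().lower() == hashlib.sha256(data).hexdigest()
```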
- diff -r ansible/ansible/library/network/uri ansible-1.6.6/library/network/uri
- 417c417
- < changed = module.set_fs_attributes_if_different(file_args, changed)
- ---
- > changed = module.set_file_attributes_if_different(file_args, changed)
- Only in ansible-1.6.6/library/notification: stackdriver
- diff -r ansible/ansible/library/packaging/apt ansible-1.6.6/library/packaging/apt
- 93c93
- < - Path to a .deb package on the remote machine.
- ---
- > - Path to a local .deb package file to install.
- 128c128
- < # Only run "update_cache=yes" if the last one is more than 3600 seconds ago
- ---
- > # Only run "update_cache=yes" if the last one is more than more than 3600 seconds ago
- 370c370
- < def upgrade(m, mode="yes", force=False, default_release=None,
- ---
- > def upgrade(m, mode="yes", force=False,
- 406,409d405
- <
- < if default_release:
- < cmd += " -t '%s'" % (default_release,)
- <
- 424c420
- < package = dict(default=None, aliases=['pkg', 'name'], type='list'),
- ---
- > package = dict(default=None, aliases=['pkg', 'name']),
- 504,505c500
- < upgrade(module, p['upgrade'], force_yes,
- < p['default_release'], dpkg_options)
- ---
- > upgrade(module, p['upgrade'], force_yes, dpkg_options)
- 514c509
- < packages = p['package']
- ---
- > packages = p['package'].split(',')
- diff -r ansible/ansible/library/packaging/apt_key ansible-1.6.6/library/packaging/apt_key
- 156d155
- <
- 159,161d157
- < if info['status'] != 200:
- < module.fail_json(msg="Failed to download key at %s: %s" % (url, info['msg']))
- <
- diff -r ansible/ansible/library/packaging/composer ansible-1.6.6/library/packaging/composer
- 88c88
- < - composer: working_dir=/path/to/project
- ---
- > - composer: command=install working_dir=/path/to/project
- diff -r ansible/ansible/library/packaging/gem ansible-1.6.6/library/packaging/gem
- 94c94
- < return module.params['executable'].split(' ')
- ---
- > return module.params['executable']
- 96c96
- < return [ module.get_bin_path('gem', True) ]
- ---
- > return module.get_bin_path('gem', True)
- 99c99
- < cmd = get_rubygems_path(module) + [ '--version' ]
- ---
- > cmd = [ get_rubygems_path(module), '--version' ]
- 110c110
- < cmd = get_rubygems_path(module)
- ---
- > cmd = [ get_rubygems_path(module) ]
- 147c147
- < cmd = get_rubygems_path(module)
- ---
- > cmd = [ get_rubygems_path(module) ]
- 168c168
- < cmd = get_rubygems_path(module)
- ---
- > cmd = [ get_rubygems_path(module) ]
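The gem hunks above change the path helper to return the command as a list rather than a string, so every call site can build its command line by list concatenation instead of re-wrapping the path. A sketch of the pattern (function name hypothetical):

```python
def get_tool_path(executable=None):
    # Return the command as a list so callers can append arguments
    # directly; splitting on spaces supports option values such as
    # "/usr/bin/env gem" that carry a wrapper command.
    if executable:
        return executable.split(' ')
    return ['gem']

# Call sites concatenate lists instead of rebuilding strings:
version_cmd = get_tool_path() + ['--version']
install_cmd = get_tool_path('/usr/bin/env gem') + ['install', 'rake']
```

Passing the resulting list straight to run_command also avoids any shell quoting of the arguments.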
- diff -r ansible/ansible/library/packaging/homebrew ansible-1.6.6/library/packaging/homebrew
- 47,52d46
- < upgrade_all:
- < description:
- < - upgrade all homebrew packages
- < required: false
- < default: no
- < choices: [ "yes", "no" ]
- 65c59
- < - homebrew: update_homebrew=yes upgrade_all=yes
- ---
- > - homebrew: update_homebrew=yes upgrade=yes
- 101d94
- < . # dots
- 109d101
- < . # dots
- 132d123
- < - dots
- 157d147
- < - dots
- 298,299c288
- < update_homebrew=False, upgrade_all=False,
- < install_options=None):
- ---
- > update_homebrew=False, install_options=None):
- 305d293
- < upgrade_all=upgrade_all,
- 429,431d416
- < if self.upgrade_all:
- < self._upgrade_all()
- <
- 472,492d456
- < # _upgrade_all --------------------------- {{{
- < def _upgrade_all(self):
- < rc, out, err = self.module.run_command([
- < self.brew_path,
- < 'upgrade',
- < ])
- < if rc == 0:
- < if not out:
- < self.message = 'Homebrew packages already upgraded.'
- <
- < else:
- < self.changed = True
- < self.message = 'Homebrew upgraded.'
- <
- < return True
- < else:
- < self.failed = True
- < self.message = err.strip()
- < raise HomebrewException(self.message)
- < # /_upgrade_all -------------------------- }}}
- <
- 776,780d739
- < upgrade_all=dict(
- < default="no",
- < aliases=["upgrade"],
- < type='bool',
- < ),
- 805c764
- < if state in ('head', ):
- ---
- > if state in ('head'):
- 817d775
- < upgrade_all = p['upgrade_all']
- 824c782
- < upgrade_all=upgrade_all, install_options=install_options)
- ---
- > install_options=install_options)
- diff -r ansible/ansible/library/packaging/npm ansible-1.6.6/library/packaging/npm
- 116c116
- < self.executable = kwargs['executable'].split(' ')
- ---
- > self.executable = kwargs['executable']
- 118c118
- < self.executable = [module.get_bin_path('npm', True)]
- ---
- > self.executable = module.get_bin_path('npm', True)
- 127c127
- < cmd = self.executable + args
- ---
- > cmd = [self.executable] + args
- diff -r ansible/ansible/library/packaging/openbsd_pkg ansible-1.6.6/library/packaging/openbsd_pkg
- 56,61d55
- <
- < # Specify a pkg flavour with '--'
- < - openbsd_pkg: name=vim--nox11 state=present
- <
- < # Specify the default flavour to avoid ambiguity errors
- < - openbsd_pkg: name=vim-- state=present
- diff -r ansible/ansible/library/packaging/pkgutil ansible-1.6.6/library/packaging/pkgutil
- 76a77,78
- > name = pipes.quote(name)
- > site = pipes.quote(site)
- 79,80c81,82
- < cmd += [ '-t', pipes.quote(site) ]
- < cmd.append(pipes.quote(name))
- ---
- > cmd += [ '-t', site ]
- > cmd.append(name)
- diff -r ansible/ansible/library/packaging/portage ansible-1.6.6/library/packaging/portage
- 365c365
- < module.exit_json(msg='Sync successfully finished.')
- ---
- > return
- diff -r ansible/ansible/library/packaging/portinstall ansible-1.6.6/library/packaging/portinstall
- 102,104c102,104
- < rc, out, err = module.run_command("%s %s" % (ports_glob_path, name))
- < #counts the numer of packages found
- < occurrences = out.count('\n')
- ---
- > rc, out, err = module.run_command("%s %s | wc" % (ports_glob_path, name))
- > parts = out.split()
- > occurrences = int(parts[0])
- 108,109c108,110
- < rc, out, err = module.run_command("%s %s" % (ports_glob_path, name_without_digits))
- < occurrences = out.count('\n')
- ---
- > rc, out, err = module.run_command("%s %s | wc" % (ports_glob_path, name_without_digits))
- > parts = out.split()
- > occurrences = int(parts[0])
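The portinstall hunks above drop the "| wc" shell pipeline and count matches directly in Python: each matching port is printed on its own line, so counting newlines in the captured output yields the same number without invoking a shell. A sketch:

```python
def count_port_matches(glob_output):
    # One match per line; counting newlines replaces "cmd | wc",
    # so run_command no longer needs a shell pipeline.
    return glob_output.count('\n')
```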
- diff -r ansible/ansible/library/packaging/redhat_subscription ansible-1.6.6/library/packaging/redhat_subscription
- 375c375
- < except Exception, e:
- ---
- > except CommandException, e:
- 388c388
- < except Exception, e:
- ---
- > except CommandException, e:
- diff -r ansible/ansible/library/packaging/rhn_register ansible-1.6.6/library/packaging/rhn_register
- 272c272
- < for available_channel in stdout.rstrip().split('\n'): # .rstrip() because of \n at the end -> empty string at the end
- ---
- > for availaible_channel in stdout.rstrip().split('\n'): # .rstrip() because of \n at the end -> empty string at the end
- diff -r ansible/ansible/library/packaging/rpm_key ansible-1.6.6/library/packaging/rpm_key
- 159c159
- < return re.match('(0x)?[0-9a-f]{8}', keystr, flags=re.IGNORECASE)
- ---
- > return re.match('(0x)?(0-9a-f){8}', keystr, flags=re.IGNORECASE)
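The rpm_key hunk above is a one-character-class fix: "(0-9a-f)" is a group matching the literal text "0-9a-f", while "[0-9a-f]" is a character class matching a single hex digit. A sketch of the corrected check (anchored with "$" here for illustration; the module's version is unanchored):

```python
import re

def is_keyid(keystr):
    # [0-9a-f] matches one hex digit; with {8} it requires exactly
    # eight of them, optionally prefixed by "0x". IGNORECASE also
    # accepts uppercase key IDs.
    return re.match(r'(0x)?[0-9a-f]{8}$', keystr, flags=re.IGNORECASE) is not None
```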
- diff -r ansible/ansible/library/packaging/yum ansible-1.6.6/library/packaging/yum
- 63,65c63,65
- < - I(Repoid) of repositories to enable for the install/update operation.
- < These repos will not persist beyond the transaction.
- < When specifying multiple repos, separate them with a ",".
- ---
- > - Repoid of repositories to enable for the install/update operation.
- > These repos will not persist beyond the transaction
- > multiple repos separated with a ','
- 73,75c73,75
- < - I(Repoid) of repositories to disable for the install/update operation.
- < These repos will not persist beyond the transaction.
- < When specifying multiple repos, separate them with a ",".
- ---
- > - I(repoid) of repositories to disable for the install/update operation
- > These repos will not persist beyond the transaction
- > Multiple repos separated with a ','
- 110c110
- < yum: name=httpd state=absent
- ---
- > yum: name=httpd state=removed
- 113c113
- < yum: name=httpd enablerepo=testing state=present
- ---
- > yum: name=httpd enablerepo=testing state=installed
- diff -r ansible/ansible/library/source_control/git ansible-1.6.6/library/source_control/git
- 64c64
- < required: false
- ---
- > requird: false
- 104c104,106
- < - If C(no), just returns information about the repository without updating.
- ---
- > - If C(yes), repository will be updated using the supplied
- > remote. Otherwise the repo will be left untouched.
- > Prior to 1.2, this was always 'yes' and could not be disabled.
- 402,404d403
- < (rc, out0, err0) = module.run_command([git_path, 'remote', 'set-url', remote, repo], cwd=dest)
- < if rc != 0:
- < module.fail_json(msg="Failed to set a new url %s for %s: %s" % (repo, remote, out0 + err0))
- 441c440
- < def switch_version(git_path, module, dest, remote, version, recursive):
- ---
- > def switch_version(git_path, module, dest, remote, version):
- 467,471c466,467
- < if recursive:
- < (rc, out2, err2) = submodule_update(git_path, module, dest)
- < out1 += out2
- < err1 += err1
- < return (rc, out1, err1)
- ---
- > (rc, out2, err2) = submodule_update(git_path, module, dest)
- > return (rc, out1 + out2, err1 + err2)
- 581c577
- < switch_version(git_path, module, dest, remote, version, recursive)
- ---
- > switch_version(git_path, module, dest, remote, version)
- diff -r ansible/ansible/library/source_control/subversion ansible-1.6.6/library/source_control/subversion
- 75,76c75
- < default: "no"
- < choices: [ "yes", "no" ]
- ---
- > default: False
- 79c78
- < - If C(yes), do export instead of checkout/update.
- ---
- > - If True, do export instead of checkout/update.
- 180c179
- < export=dict(default=False, required=False, type='bool'),
- ---
- > export=dict(default=False, required=False),
- diff -r ansible/ansible/library/system/alternatives ansible-1.6.6/library/system/alternatives
- 113,114d112
- < elif key == 'Link' and not link:
- < link = value
- 132c130
- < except subprocess.CalledProcessError, cpe:
- ---
- > except subprocess.CalledProcessError as cpe:
- diff -r ansible/ansible/library/system/cron ansible-1.6.6/library/system/cron
- 50c50
- < - The specific user whose crontab should be modified.
- ---
- > - The specific user who's crontab should be modified.
- 98c98
- < - Day of the week that the job should run ( 0-6 for Sunday-Saturday, *, etc )
- ---
- > - Day of the week that the job should run ( 0-7 for Sunday - Saturday, *, etc )
- 357,358d356
- < elif platform.system() == 'HP-UX':
- < return "%s %s %s" % (CRONCMD , '-l', pipes.quote(self.user))
- 369c367
- < if platform.system() in ['SunOS', 'HP-UX', 'AIX']:
- ---
- > if platform.system() in [ 'SunOS', 'AIX' ]:
- diff -r ansible/ansible/library/system/group ansible-1.6.6/library/system/group
- 124c124
- < return (0, '', '')
- ---
- > return (True, '', '')
- 209c209
- < return (0, '', '')
- ---
- > return (True, '', '')
- 248c248
- < return (0, '', '')
- ---
- > return (True, '', '')
- 290c290
- < return (0, '', '')
- ---
- > return (True, '', '')
- 332c332
- < return (0, '', '')
- ---
- > return (True, '', '')
- diff -r ansible/ansible/library/system/hostname ansible-1.6.6/library/system/hostname
- 42,45d41
- < # import module snippets
- < from ansible.module_utils.basic import *
- <
- <
- 142d137
- <
- 158c153
- < self.module.fail_json(msg="failed to write file: %s" %
- ---
- > self.module.fail_json(msg="failed to write file: %s" %
- 180a176,184
- > class DebianHostname(Hostname):
- > platform = 'Linux'
- > distribution = 'Debian'
- > strategy_class = DebianStrategy
- >
- > class UbuntuHostname(Hostname):
- > platform = 'Linux'
- > distribution = 'Ubuntu'
- > strategy_class = DebianStrategy
- 229a234,237
- > class RedHat5Hostname(Hostname):
- > platform = 'Linux'
- > distribution = 'Redhat'
- > strategy_class = RedHatStrategy
- 230a239,263
- > class RedHatServerHostname(Hostname):
- > platform = 'Linux'
- > distribution = 'Red hat enterprise linux server'
- > strategy_class = RedHatStrategy
- >
- > class RedHatWorkstationHostname(Hostname):
- > platform = 'Linux'
- > distribution = 'Red hat enterprise linux workstation'
- > strategy_class = RedHatStrategy
- >
- > class CentOSHostname(Hostname):
- > platform = 'Linux'
- > distribution = 'Centos'
- > strategy_class = RedHatStrategy
- >
- > class AmazonLinuxHostname(Hostname):
- > platform = 'Linux'
- > distribution = 'Amazon'
- > strategy_class = RedHatStrategy
- >
- > class ScientificLinuxHostname(Hostname):
- > platform = 'Linux'
- > distribution = 'Scientific'
- > strategy_class = RedHatStrategy
- >
- 274,276d306
- <
- < # ===========================================
- <
- 292,348d321
- < class RedHat5Hostname(Hostname):
- < platform = 'Linux'
- < distribution = 'Redhat'
- < strategy_class = RedHatStrategy
- <
- < class RedHatServerHostname(Hostname):
- < platform = 'Linux'
- < distribution = 'Red hat enterprise linux server'
- < if float(get_distribution_version()) >= 7:
- < strategy_class = FedoraStrategy
- < else:
- < strategy_class = RedHatStrategy
- <
- < class RedHatWorkstationHostname(Hostname):
- < platform = 'Linux'
- < distribution = 'Red hat enterprise linux workstation'
- < if float(get_distribution_version()) >= 7:
- < strategy_class = FedoraStrategy
- < else:
- < strategy_class = RedHatStrategy
- <
- < class CentOSHostname(Hostname):
- < platform = 'Linux'
- < distribution = 'Centos'
- < if float(get_distribution_version()) >= 7:
- < strategy_class = FedoraStrategy
- < else:
- < strategy_class = RedHatStrategy
- <
- < class ScientificLinuxHostname(Hostname):
- < platform = 'Linux'
- < distribution = 'Scientific'
- < if float(get_distribution_version()) >= 7:
- < strategy_class = FedoraStrategy
- < else:
- < strategy_class = RedHatStrategy
- <
- < class AmazonLinuxHostname(Hostname):
- < platform = 'Linux'
- < distribution = 'Amazon'
- < strategy_class = RedHatStrategy
- <
- < class DebianHostname(Hostname):
- < platform = 'Linux'
- < distribution = 'Debian'
- < strategy_class = DebianStrategy
- <
- < class UbuntuHostname(Hostname):
- < platform = 'Linux'
- < distribution = 'Ubuntu'
- < strategy_class = DebianStrategy
- <
- < class LinaroHostname(Hostname):
- < platform = 'Linux'
- < distribution = 'Linaro'
- < strategy_class = DebianStrategy
- <
- 373a347,348
- > # import module snippets
- > from ansible.module_utils.basic import *
- diff -r ansible/ansible/library/system/lvg ansible-1.6.6/library/system/lvg
- 70c70
- < # Create or resize a volume group on top of /dev/sdb1 and /dev/sdc5.
- ---
- > # Create or resize a volume group on top of /dev/sdb1 and /dev/sdc5.
- 92,101c92
- < def find_mapper_device_name(module, dm_device):
- < dmsetup_cmd = module.get_bin_path('dmsetup', True)
- < mapper_prefix = '/dev/mapper/'
- < rc, dm_name, err = module.run_command("%s info -C --noheadings -o name %s" % (dmsetup_cmd, dm_device))
- < if rc != 0:
- < module.fail_json(msg="Failed executing dmsetup command.", rc=rc, err=err)
- < mapper_device = mapper_prefix + dm_name.rstrip()
- < return mapper_device
- <
- < def parse_pvs(module, data):
- ---
- > def parse_pvs(data):
- 103d93
- < dm_prefix = '/dev/dm-'
- 106,107d95
- < if parts[0].startswith(dm_prefix):
- < parts[0] = find_mapper_device_name(module, parts[0])
- 120c108
- < vg_options=dict(default=''),
- ---
- > vg_options=dict(),
- 131c119
- < vgoptions = module.params['vg_options'].split()
- ---
- > vgoptions = module.params.get('vg_options', '').split()
- 140c128
- <
- ---
- >
- 154c142
- < pvs = parse_pvs(module, current_pvs)
- ---
- > pvs = parse_pvs(current_pvs)
- diff -r ansible/ansible/library/system/modprobe ansible-1.6.6/library/system/modprobe
- 4,7d3
- < # (c) 2013, David Stygstra <[email protected]>
- < #
- < # This file is part of Ansible
- < #
- 28d23
- < author: David Stygstra, Julien Dauphant, Matt Jeffery
- diff -r ansible/ansible/library/system/open_iscsi ansible-1.6.6/library/system/open_iscsi
- 165,166d164
- < elif rc == 21:
- < return False
- diff -r ansible/ansible/library/system/service ansible-1.6.6/library/system/service
- 391a392,393
- > location[binary] = None
- > for binary in binaries:
- 481c483
- < self.module.fail_json(msg='failure %d running systemctl show for %r: %s' % (rc, self.__systemd_unit, err))
- ---
- > self.module.fail_json('failure %d running systemctl show for %r: %s' % (self.__systemd_unit, rc, err))
- diff -r ansible/ansible/library/system/setup ansible-1.6.6/library/system/setup
- 57,59d56
- < - If the target host is Windows, you will not currently have the ability to use
- < C(fact_path) or C(filter) as this is provided by a simpler implementation of the module.
- < Different facts are returned for Windows hosts.
- diff -r ansible/ansible/library/system/ufw ansible-1.6.6/library/system/ufw
- 208,210d207
- < if('interface' in params and 'direction' not in params):
- < module.fail_json(msg="Direction must be specified when creating a rule on an interface")
- <
- diff -r ansible/ansible/library/system/user ansible-1.6.6/library/system/user
- 184c184
- < - user: name=johnd comment="John Doe" uid=1040 group=admin
- ---
- > - user: name=johnd comment="John Doe" uid=1040
- diff -r ansible/ansible/library/system/zfs ansible-1.6.6/library/system/zfs
- 313,325c313,320
- < def get_properties_by_name(propname):
- < cmd = [self.module.get_bin_path('zfs', True)]
- < cmd += ['get', '-H', propname, self.name]
- < rc, out, err = self.module.run_command(cmd)
- < return [l.split('\t')[1:3] for l in out.splitlines()]
- < properties = dict(get_properties_by_name('all'))
- < if 'share.*' in properties:
- < # Some ZFS pools list the sharenfs and sharesmb properties
- < # hierarchically as share.nfs and share.smb respectively.
- < del properties['share.*']
- < for p, v in get_properties_by_name('share.all'):
- < alias = p.replace('.', '') # share.nfs -> sharenfs (etc)
- < properties[alias] = v
- ---
- > cmd = [self.module.get_bin_path('zfs', True)]
- > cmd.append('get -H all')
- > cmd.append(self.name)
- > rc, out, err = self.module.run_command(' '.join(cmd))
- > properties = dict()
- > for l in out.splitlines():
- > p, v = l.split('\t')[1:3]
- > properties[p] = v
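The zfs hunk above parses "zfs get -H" output, which emits tab-separated NAME, PROPERTY, VALUE, SOURCE columns with one property per line. The parsing step can be sketched as (sample output invented):

```python
def parse_zfs_properties(out):
    # "zfs get -H" emits NAME<TAB>PROPERTY<TAB>VALUE<TAB>SOURCE;
    # keep only the property/value pair from each line.
    properties = {}
    for line in out.splitlines():
        prop, value = line.split('\t')[1:3]
        properties[prop] = value
    return properties

sample = "tank\tcompression\tlz4\tlocal\ntank\tmountpoint\t/tank\tdefault\n"
```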
- diff -r ansible/ansible/library/utilities/include_vars ansible-1.6.6/library/utilities/include_vars
- 35,36c35
- < - "{{ ansible_distribution }}.yml"
- < - "{{ ansible_os_family }}.yml"
- ---
- > - "{{ ansible_os_distribution }}.yml"
- 38c37
- <
- ---
- >
- diff -r ansible/ansible/library/utilities/wait_for ansible-1.6.6/library/utilities/wait_for
- 180,202c180,193
- < os.stat(path)
- < if search_regex:
- < try:
- < f = open(path)
- < try:
- < if re.search(search_regex, f.read(), re.MULTILINE):
- < break
- < else:
- < time.sleep(1)
- < finally:
- < f.close()
- < except IOError:
- < time.sleep(1)
- < pass
- < else:
- < break
- < except OSError, e:
- < # File not present
- < if os.errno == 2:
- < time.sleep(1)
- < else:
- < elapsed = datetime.datetime.now() - start
- < module.fail_json(msg="Failed to stat %s, %s" % (path, e.strerror), elapsed=elapsed.seconds)
- ---
- > f = open(path)
- > try:
- > if search_regex:
- > if re.search(search_regex, f.read(), re.MULTILINE):
- > break
- > else:
- > time.sleep(1)
- > else:
- > break
- > finally:
- > f.close()
- > except IOError:
- > time.sleep(1)
- > pass
- diff -r ansible/ansible/library/web_infrastructure/django_manage ansible-1.6.6/library/web_infrastructure/django_manage
- 31c31
- < choices: [ 'cleanup', 'collectstatic', 'flush', 'loaddata', 'migrate', 'runfcgi', 'syncdb', 'test', 'validate', ]
- ---
- > choices: [ 'cleanup', 'flush', 'loaddata', 'runfcgi', 'syncdb', 'test', 'validate', 'migrate', 'collectstatic' ]
- 33c33
- < - The name of the Django management command to run. Built in commands are cleanup, collectstatic, flush, loaddata, migrate, runfcgi, syncdb, test, and validate. Other commands can be entered, but will fail if they're unknown to Django.
- ---
- > - The name of the Django management command to run. Allowed commands are cleanup, createcachetable, flush, loaddata, syncdb, test, validate.
- 147d146
- < os.environ["VIRTUAL_ENV"] = venv_param
- diff -r ansible/ansible/library/web_infrastructure/htpasswd ansible-1.6.6/library/web_infrastructure/htpasswd
- 166c166
- < if module.set_fs_attributes_if_different(file_args, False):
- ---
- > if module.set_file_attributes_if_different(file_args, False):
- Only in ansible/ansible/library: windows
- diff -r ansible/ansible/Makefile ansible-1.6.6/Makefile
- 78c78
- < ifneq ($(OFFICIAL),yes)
- ---
- > ifeq ($(OFFICIAL),)
- diff -r ansible/ansible/MANIFEST.in ansible-1.6.6/MANIFEST.in
- 6d5
- < include lib/ansible/module_common/*.ps1
- 10,11d8
- < include VERSION
- < include MANIFEST.in
- Only in ansible/ansible/packaging: arch
- Only in ansible/ansible/packaging: debian
- Only in ansible/ansible/packaging: gentoo
- Only in ansible/ansible/packaging: macports
- Only in ansible/ansible/packaging: port
- Only in ansible-1.6.6: PKG-INFO
- Only in ansible/ansible: plugins
- diff -r ansible/ansible/README.md ansible-1.6.6/README.md
- 45,46c45
- < Ansible was created by Michael DeHaan ([email protected]) and has contributions from over
- < 700 users (and growing). Thanks everyone!
- ---
- > Michael DeHaan -- [email protected]
- Only in ansible/ansible: RELEASES.txt
- Only in ansible-1.6.6: setup.cfg
- diff -r ansible/ansible/setup.py ansible-1.6.6/setup.py
- 50d49
- < 'ansible.runner.shell_plugins',
- Only in ansible/ansible: test
- Only in ansible/ansible: VERSION