yegoblynqueenne

Learning ordered/3 with correct </2, >/2

Nov 10th, 2020
:-module(ordered, [background_knowledge/2
                  ,metarules/2
                  ,positive_example/2
                  ,negative_example/2
                  ]).


:-use_module(configuration).

/** <module> A longer example for Veedrac on HN.

Initiate a multi-predicate learning session to learn all the targets
jointly. Use dynamic learning to learn each target incrementally, by
reusing each clause learned in an earlier iteration.

ordered/3 is true when len(n1) < len(n2) < len(n3):

==
?- time(learn_dynamic([llength/2,shorter/2,ordered/3])).
llength([],0).
llength(A,B):-tail(A,C),p(B,D),llength(C,D).
shorter(A,B):-llength(A,C),llength(B,D),s(C,D).
ordered(A,B,C):-shorter(A,B),shorter(B,C).
% 19,682 inferences, 0.016 CPU in 0.009 seconds (172% CPU, 1259648 Lips)
true.
==
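
The learned program can also be run as ordinary Prolog, once its
clauses and the BK predicates defined below are loaded. A sketch; the
exact top-level output may differ:

==
?- ordered([a],[b,c],[d,e,f]).
true.

?- ordered([a,b],[c],[d,e,f]).
false.
==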

ordered_leq/3 is true when len(n1) =< len(n2) =< len(n3):

==
?- time(learn_dynamic([llength/2,shorter/2,ordered_leq/3])).
llength([],0).
llength(A,B):-tail(A,C),p(B,D),llength(C,D).
shorter(A,B):-llength(A,C),llength(B,D),leq(C,D).
ordered_leq(A,B,C):-shorter(A,B),shorter(B,C).
% 41,861 inferences, 0.016 CPU in 0.011 seconds (142% CPU, 2679104 Lips)
true.
==

If something fails, check that the following configuration options are
set:

==
?- list_config.
example_clauses(call)
experiment_file(data/drafts/functions/ordered.pl,ordered)
learner(louise)
max_invented(0)
minimal_program_size(2,inf)
recursion_depth_limit(dynamic_learning,50000)
recursive_reduction(false)
reduction(plotkins)
resolutions(5000)
symbol_range(predicate,[P,Q,R,S,T])
symbol_range(variable,[X,Y,Z,U,V,W])
theorem_prover(resolution)
unfold_invented(false)
true.
==

*/

% ================================================================================
% Metarule constraints. Strictly speaking this is a misnomer: these are
% _clause_ constraints, used to eliminate unwanted clauses. We use them
% here mainly for aesthetic reasons.
% ================================================================================

% This constraint cuts left-recursive shorter/2 clauses that make it
% fiddly to run the learned hypothesis in Prolog.
configuration:metarule_constraints(M,fail):-
    M =.. [m,Id,P|Ps]
    ,Id \= projection
    ,left_recursive(P,Ps).

left_recursive(T,[T|_Ps]):-
    !.
left_recursive(T,[T,T|_Ps]):-
    !.

% Cuts a clause redefining llength/2 in terms of shorter/2:
% llength(A,B):-shorter(A,C),s(B,D),llength(C,D).
% This would be fine in an exploratory session, say.
configuration:metarule_constraints(M,fail):-
    M =.. [m,_Id,llength,shorter|_].


% Avoids definition of shorter/2 by tail/2:
% shorter(A,B):-tail(A,C),tail(B,D),shorter(C,D).
% This compares the tails of two lists, which is the same as comparing
% the full lists, so it's correct, but again it's more useful in an
% exploratory session.
configuration:metarule_constraints(M,fail):-
    M =.. [m,_Id,shorter,tail|_].


% ================================================================================
% Various configs
% ================================================================================

% Sets the maximum number of predicates allowed to be invented.
% It's set to 0 because we don't need to invent any. We still use
% dynamic learning for the ability to learn incrementally by reusing
% predicates learned (or invented) in earlier learning steps.
:- auxiliaries:set_configuration_option(max_invented, [0]).


% ================================================================================
% Declarations of Meta-Interpretive Learning problem elements
% Elements of a MIL problem are: metarules, background knowledge (BK)
% and examples.
% ================================================================================

% Metarules for the experiment are derived from Prolog program
% skeletons. They are essentially "program recipes" that can be reused
% to construct different programs.
configuration: list_rec_func metarule 'P(x,y):- Q(x,z),R(y,u),P(z,u)'.
configuration: triadic_chain metarule 'P(x,z,y):- Q(x,z), R(z,y)'.
configuration: list_comp metarule 'P(x,y):- Q(x,z), R(y,u), S(z,u)'.
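% For instance, during learning the triadic_chain metarule can be
% instantiated with P = ordered and Q = R = shorter, which yields the
% clause shown in the module documentation above:
% ordered(A,B,C):-shorter(A,B),shorter(B,C).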

% Metarule declarations - though specific metarules are declared for
% each learning target, multi-predicate learning allows Louise to use
% any of the declared metarules for any target.
metarules(llength/2,[abduce,list_rec_func]).
metarules(shorter/2,[list_comp]).
metarules(ordered/3,[triadic_chain]).
metarules(ordered_leq/3,[triadic_chain]).

% Background knowledge (BK) declarations - again, multi-predicate
% learning means any BK predicate can be used for any target. For
% example, we could leave the two latter lists empty and add s/2 to the
% first.
background_knowledge(llength/2, [tail/2,p/2]).
background_knowledge(shorter/2, [s/2]).
% ground_peano/1 is added here so it's reported by list_mil_problem/1
background_knowledge(ordered/3, [ground_peano/1]).
background_knowledge(ordered_leq/3, [leq/2,ground_peano/1]).

% Positive examples are declared separately for each target. They are
% added to the BK during learning, so any learning target can be defined
% in terms of these partial, extensional definitions of other targets.
positive_example(llength/2,E):-
        member(E, [llength([],0)
                  ,llength([a],s(0))
                  ]
              ).
positive_example(shorter/2,shorter(Xs,Ys)):-
        member(Xs-Ys,[[a]-[b,c]
                     ,[1,2]-[3,4,5]
                     ]
              ).
positive_example(ordered/3,ordered(S1,S2,S3)):-
        member([S1,S2,S3],[[[a],[b,c],[d,e,f]]
                          ]
              ).
positive_example(ordered_leq/3,ordered_leq(S1,S2,S3)):-
        member([S1,S2,S3],[[[a],[b],[d]]
                          ,[[a],[b,c],[d,e,f]]
                          ,[[a],[c],[e,f,g,h,i]]
                          ]
              ).

% Negative examples
negative_example(shorter/2,_):-
        fail.
negative_example(llength/2,_):-
        fail.
negative_example(ordered/3,ordered(S1,S2,S3)):-
% These are not strictly needed - but I add them to address Veedrac's
% concern that Louise is not learning a sound program for ordered/3.
        member([S1,S2,S3],[[[a],[b],[c]]
                          ,[[a,b,c],[a,b],[c]]
                          ]
              ).
negative_example(ordered_leq/3,ordered_leq(S1,S2,S3)):-
% These are not strictly needed - but I add them to address Veedrac's
% concern that Louise is not learning a sound program for ordered_leq/3.
        member([S1,S2,S3],[[[a,b,c],[a,b],[c]]
                          ]
              ).


% ================================================================================
% Background knowledge predicate definitions.
%
% BK in Louise consists of arbitrary Prolog programs. Strictly speaking,
% they must have definite datalog heads, but this is not enforced in
% practice. I like to see BK as a kind of programming library,
% containing sub-routines, sub-programs from which the target programs
% can be composed.
%
% Metarules on the other hand must be definite datalog and Louise _does_
% enforce this restriction. The predicates below are used to allow
% Louise to process non-datalog examples, i.e. examples with function
% symbols in their head or body literals. In a sense, the function
% symbols are "hidden away" in the BK. The technique is similar to
% "flattening": Rouveirol, C, Machine Learning, 1994:
%
% https://paperity.org/p/7456046/flattening-and-saturation-two-representation-changes-for-generalization
%
% The predicates below have universal applicability for problems
% involving lists and numbers. head/2 and tail/2 split a list, s/2 and
% p/2 are used to "dereference" Peano number functions.
%
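% For example, instead of metarules containing the Peano function symbol
% s/1, the learned llength/2 clause "dereferences" it through p/2:
% llength(A,B):-tail(A,C),p(B,D),llength(C,D).
%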
% Note that head/2 is not actually used above.
% ================================================================================

%!      head(?List,?Head) is det.
%
%       True when Head is the head of list List.
%
head([H|_T],H).

%!      tail(?List,?Tail) is det.
%
%       True when Tail is the tail of list List.
%
tail([_H|T],T).


%!      leq(?N,?M) is nondet.
%
%       True when N =< M, where N, M are Peano integers.
%
leq(X,X):-
    ground_peano(X).
leq(X,Y):-
    s(X,Y).
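% A sketch of the expected behaviour, with Peano numerals:
% ?- leq(s(0),s(0)).    % true: equal ground Peano numbers
% ?- leq(s(0),s(s(0))). % true: 1 =< 2, via s/2
% ?- leq(s(s(0)),s(0)). % fails: 2 > 1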


%!      p(?Number, ?Predecessor) is nondet.
%
%       A Peano Number and its Predecessor.
%
p(s(N),0):-
    ground_peano(N).
p(s(N),N).
p(s(N),s(M)):-
    ground_peano(N)
    ,p(N,M).


%!      s(?Number, ?Successor) is nondet.
%
%       A Peano Number and its Successor.
%
s(0,s(N)):-
    ground_peano(N).
s(N,s(N)).
s(s(M),s(N)):-
    ground_peano(N)
    ,s(M,N).
  254.  
  255.  
  256. %!  ground_peano(?N) is det.
  257. %
  258. %   True when N is a ground Peano number.
  259. %
  260. ground_peano(X):-
  261.     ground(X)
  262.     ,\+ is_list(X).
  263.  
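
% A sketch of how the Peano predicates are expected to behave:
% ?- s(0,s(0)).       % true: 0 precedes 1
% ?- s(0,s(s(0))).    % true: s/2 also relates non-immediate successors
% ?- p(s(s(0)),s(0)). % true: 2 follows 1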