Agent's Learning Concept for Negation

  • Published: 2000.05.15

Abstract

One of the hidden problems in a domain theory is that an agent does not understand the meaning of its own actions. Graphplan uses mutex relations to improve efficiency, but because it does not understand negation, it suffers from a redundancy problem in the domain theory. Introducing a negation function such as not, as IPP does, partially solves this problem; however, using a negation function carries its own cost in time and space. Observing that humans, following the MDL principle, negate a fact by using its opposite concept, we hypothesize that representing a negative fact with a positive atom rather than a negation function is a more efficient technique for building an intelligent agent, and we present empirical results in IPP domains that support this hypothesis. To let the agent learn this human-like concept autonomously, we generate cycles of opposite operators from the domain theory and extract opposite literal pairs by experimenting with those operators.
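
To make the representational idea concrete, here is a minimal sketch (not taken from the paper) of how an agent might extract opposite literal pairs from a two-operator cycle in a STRIPS-style domain theory and thereby encode a negative fact with a positive atom. The toy door domain, the operator names open-door/close-door, and the cycle-detection heuristic below are illustrative assumptions.

```python
# Minimal sketch, assuming a STRIPS-style encoding with positive atoms only.
# The toy door domain and the cycle-detection heuristic are illustrative
# assumptions, not the paper's implementation.
from dataclasses import dataclass


@dataclass
class Operator:
    name: str
    pre: frozenset     # precondition literals (positive atoms)
    add: frozenset     # add list
    delete: frozenset  # delete list


def opposite_literals(operators):
    """Pair up literals exchanged by two operators that undo each other,
    i.e. operators that form a two-step cycle in the domain theory."""
    pairs = set()
    for a in operators:
        for b in operators:
            if a is b:
                continue
            # a adds exactly what b deletes, and vice versa -> cycle a;b
            if a.add <= b.delete and a.delete <= b.add:
                for added in a.add:
                    for deleted in a.delete:
                        pairs.add(frozenset((added, deleted)))
    return pairs


# The negative fact "(not (door-open d))" is represented here by the
# positive atom "door-closed", so no negation function is required.
open_door = Operator("open-door",
                     pre=frozenset({"door-closed"}),
                     add=frozenset({"door-open"}),
                     delete=frozenset({"door-closed"}))
close_door = Operator("close-door",
                      pre=frozenset({"door-open"}),
                      add=frozenset({"door-closed"}),
                      delete=frozenset({"door-open"}))

print(opposite_literals([open_door, close_door]))
# {frozenset({'door-open', 'door-closed'})}  (element order may vary)
```

In this encoding, a precondition that would otherwise be written with a negation function, e.g. (not (door-open ?d)), becomes the positive atom door-closed, which is the kind of positive-atom representation the abstract argues avoids the time and space overhead of negation.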

References

  1. Blum, A. L. and Furst, M. L., Fast Planning through Planning Graph Analysis, Artificial Intelligence 90(1-2): 281-300, 1997 https://doi.org/10.1016/S0004-3702(96)00047-1
  2. Koehler, J., Extending Planning Graphs to an ADL Subset, Proc. 4th European Conference on Planning, 1997
  3. Kautz, H. and Selman, B., Pushing the Envelope: Planning, Propositional Logic, and Stochastic Search, Proc. 13th Nat. Conf. on AI, 1996
  4. Rissanen, J., Stochastic Complexity in Statistical Inquiry, World Scientific Publishing Company, 1989
  5. Fikes, R. and Nilsson, N., STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving, Artificial Intelligence 2, 1971
  6. Tae, K. S. and Cook, D., Experimental Knowledge Acquisition for Planning, Proc. 13th International Conference on Machine Learning, 1996
  7. DesJardins, M., Knowledge Development Methods for Planning Systems, AAAI-94 Fall Symposium Series: Planning and Learning: On to Real Applications, 1994
  8. Knoblock, C., Automatically Generating Abstractions for Planning, Artificial Intelligence 68, 1994 https://doi.org/10.1016/0004-3702(94)90069-8
  9. Carbonell, J. G., Blythe, J., Etzioni, O., Gil, Y., Knoblock, C., Minton, S., Perez, A., and Wang, X., PRODIGY 4.0: The Manual and Tutorial, Technical Report CMU-CS-92-150, Carnegie Mellon University, Pittsburgh, PA, 1992
  10. Kambhampati, S., Lambrecht, E., and Parker, E., Understanding and Extending Graphplan, Proc. 4th European Conference on Planning, 1997
  11. Tae, K. S., Cook, D., and Holder, L. B., Experimentation-Driven Knowledge Acquisition for Planning, to appear in Computational Intelligence 15(3), 1999
  12. Fox, M. and Long, D., The Automatic Inference of State Invariants in TIM, 1998
  13. Gil, Y., Acquiring Domain Knowledge for Planning by Experimentation, Ph.D. Dissertation, Carnegie Mellon University, 1992
  14. Wang, X., Learning by Observation and Practice: An Incremental Approach for Planning Operator Acquisition, Proc. 12th International Conference on Machine Learning, 1995
  15. Weld, D., Recent Advances in AI Planning, AI Magazine, 1999
  16. Smith, D. and Weld, D., Conformant Graphplan, Proc. 15th Nat. Conf. on AI, 1998