Date:      Fri, 21 Jun 2019 23:08:45 +0000 (UTC)
From:      Sunpoet Po-Chuan Hsieh <sunpoet@FreeBSD.org>
To:        ports-committers@freebsd.org, svn-ports-all@freebsd.org, svn-ports-head@freebsd.org
Subject:   svn commit: r504818 - in head/math: . py-gym
Message-ID:  <201906212308.x5LN8jtm061852@repo.freebsd.org>

Author: sunpoet
Date: Fri Jun 21 23:08:45 2019
New Revision: 504818
URL: https://svnweb.freebsd.org/changeset/ports/504818

Log:
  Add py-gym 0.12.5
  
  OpenAI Gym is a toolkit for developing and comparing reinforcement learning
  algorithms. This is the gym open-source library, which gives you access to a
  standardized set of environments.
  
  gym makes no assumptions about the structure of your agent, and is compatible
  with any numerical computation library, such as TensorFlow or Theano. You can
  use it from Python code, and soon from other languages.
  
  There are two basic concepts in reinforcement learning: the environment (namely,
  the outside world) and the agent (namely, the algorithm you are writing). The
  agent sends actions to the environment, and the environment replies with
  observations and rewards (that is, a score).
  
  The core gym interface is Env, the unified environment interface. There is no
  interface for agents; that part is left to you. The Env methods you should
  know (a short usage sketch follows the list) are:
  - reset(self): Reset the environment's state. Returns observation.
  - step(self, action): Step the environment by one timestep. Returns observation,
    reward, done, info.
  - render(self, mode='human'): Render one frame of the environment. The default
    mode will do something human-friendly, such as popping up a window.
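  
  As a minimal sketch (the 'CartPole-v0' environment id and the random agent
  below are illustrative assumptions, not part of this port), a loop driving
  these methods could look like:
  
    import gym
    
    env = gym.make('CartPole-v0')  # any registered environment id works here
    observation = env.reset()      # reset() returns the initial observation
    done = False
    while not done:
        env.render()               # draw one frame (a window by default)
        # Stand-in agent: sample a random action from the action space.
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
    env.close()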
  
  WWW: https://gym.openai.com/
  WWW: https://github.com/openai/gym

Added:
  head/math/py-gym/
  head/math/py-gym/Makefile   (contents, props changed)
  head/math/py-gym/distinfo   (contents, props changed)
  head/math/py-gym/pkg-descr   (contents, props changed)
Modified:
  head/math/Makefile

Modified: head/math/Makefile
==============================================================================
--- head/math/Makefile	Fri Jun 21 23:08:38 2019	(r504817)
+++ head/math/Makefile	Fri Jun 21 23:08:45 2019	(r504818)
@@ -707,6 +707,7 @@
     SUBDIR += py-gnuplot
     SUBDIR += py-grandalf
     SUBDIR += py-graphillion
+    SUBDIR += py-gym
     SUBDIR += py-igakit
     SUBDIR += py-igraph
     SUBDIR += py-intspan

Added: head/math/py-gym/Makefile
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/math/py-gym/Makefile	Fri Jun 21 23:08:45 2019	(r504818)
@@ -0,0 +1,27 @@
+# Created by: Po-Chuan Hsieh <sunpoet@FreeBSD.org>
+# $FreeBSD$
+
+PORTNAME=	gym
+PORTVERSION=	0.12.5
+CATEGORIES=	math python
+MASTER_SITES=	CHEESESHOP
+PKGNAMEPREFIX=	${PYTHON_PKGNAMEPREFIX}
+
+MAINTAINER=	sunpoet@FreeBSD.org
+COMMENT=	OpenAI toolkit for developing and comparing your reinforcement learning agents
+
+LICENSE=	MIT
+
+RUN_DEPENDS=	${PYTHON_PKGNAMEPREFIX}numpy>=1.10.4:math/py-numpy@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}pyglet>=0:graphics/py-pyglet@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}scipy>=0:science/py-scipy@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}six>=0:devel/py-six@${PY_FLAVOR}
+TEST_DEPENDS=	${PYTHON_PKGNAMEPREFIX}mock>=0:devel/py-mock@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}pytest>=0:devel/py-pytest@${PY_FLAVOR}
+
+USES=		python
+USE_PYTHON=	autoplist concurrent distutils
+
+NO_ARCH=	yes
+
+.include <bsd.port.mk>

Added: head/math/py-gym/distinfo
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/math/py-gym/distinfo	Fri Jun 21 23:08:45 2019	(r504818)
@@ -0,0 +1,3 @@
+TIMESTAMP = 1561148961
+SHA256 (gym-0.12.5.tar.gz) = 027422f59b662748eae3420b804e35bbf953f62d40cd96d2de9f842c08de822e
+SIZE (gym-0.12.5.tar.gz) = 1544308

Added: head/math/py-gym/pkg-descr
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/math/py-gym/pkg-descr	Fri Jun 21 23:08:45 2019	(r504818)
@@ -0,0 +1,24 @@
+OpenAI Gym is a toolkit for developing and comparing reinforcement learning
+algorithms. This is the gym open-source library, which gives you access to a
+standardized set of environments.
+
+gym makes no assumptions about the structure of your agent, and is compatible
+with any numerical computation library, such as TensorFlow or Theano. You can
+use it from Python code, and soon from other languages.
+
+There are two basic concepts in reinforcement learning: the environment (namely,
+the outside world) and the agent (namely, the algorithm you are writing). The
+agent sends actions to the environment, and the environment replies with
+observations and rewards (that is, a score).
+
+The core gym interface is Env, the unified environment interface. There is no
+interface for agents; that part is left to you. The following are the Env
+methods you should know:
+- reset(self): Reset the environment's state. Returns observation.
+- step(self, action): Step the environment by one timestep. Returns observation,
+  reward, done, info.
+- render(self, mode='human'): Render one frame of the environment. The default
+  mode will do something human-friendly, such as popping up a window.
+
+WWW: https://gym.openai.com/
+WWW: https://github.com/openai/gym


