Date:      Wed, 10 Apr 2013 12:52:25 GMT
From:      Jason Bacon <jwbacon@tds.net>
To:        freebsd-gnats-submit@FreeBSD.org
Subject:   ports/177753: New port: sysutils/slurm-devel
Message-ID:  <201304101252.r3ACqPJU021343@red.freebsd.org>
Resent-Message-ID: <201304101300.r3AD01X3032208@freefall.freebsd.org>


>Number:         177753
>Category:       ports
>Synopsis:       New port: sysutils/slurm-devel
>Confidential:   no
>Severity:       non-critical
>Priority:       low
>Responsible:    freebsd-ports-bugs
>State:          open
>Quarter:        
>Keywords:       
>Date-Required:
>Class:          change-request
>Submitter-Id:   current-users
>Arrival-Date:   Wed Apr 10 13:00:00 UTC 2013
>Closed-Date:
>Last-Modified:
>Originator:     Jason Bacon
>Release:        8.3-RELEASE
>Organization:
Acadix Consulting, LLC
>Environment:
FreeBSD sculpin.jbacon.dyndns.org 8.3-RELEASE FreeBSD 8.3-RELEASE #0: Mon Apr  9 21:23:18 UTC 2012     root@mason.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC  amd64
>Description:
SLURM is an open-source resource manager designed for Linux clusters of all
sizes. It provides three key functions. First, it allocates exclusive or
non-exclusive access to resources (compute nodes) to users for some duration
of time so they can perform work. Second, it provides a framework for starting,
executing, and monitoring work (typically a parallel job) on a set of allocated
nodes. Finally, it arbitrates contention for resources by managing a queue of
pending work.

This development port is provided for testing purposes.  Please report any issues to the maintainer.

There is one known bug that causes slurmctld to crash when a node is configured with CPUs>1 or Sockets>1.  A workaround is provided in the configuration file template, files/slurm.conf.in.
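To illustrate the workaround (taken from the comment in files/slurm.conf.in): declare Sockets=1 and fold every physical core into CoresPerSocket, even on multi-socket boards.  The node name and core count below are hypothetical; adjust them to your hardware.

```
# slurm.conf compute-node entry for (e.g.) a dual-socket machine with
# 8 cores total.  Using CPUs=2 or Sockets=2 seg faults slurmctld 2.5.4
# on FreeBSD, so all cores are declared under a single socket instead.
NodeName=compute-001 Sockets=1 CoresPerSocket=8 ThreadsPerCore=1 State=UNKNOWN
```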
>How-To-Repeat:

>Fix:


Patch attached with submission follows:

# This is a shell archive.  Save it in a file, remove anything before
# this line, and then unpack it by entering "sh file".  Note, it may
# create directories; files and directories will be owned by you and
# have default permissions.
#
# This archive contains:
#
#	slurm-hpc-devel
#	slurm-hpc-devel/files
#	slurm-hpc-devel/files/patch-src-common-xcgroup.c
#	slurm-hpc-devel/files/patch-src-common-xcgroup.h
#	slurm-hpc-devel/files/patch-src-common-slurm_jobacct_gather.c
#	slurm-hpc-devel/files/patch-src-slurmctld-gang.c
#	slurm-hpc-devel/files/patch-src-slurmctld-job_scheduler.c
#	slurm-hpc-devel/files/patch-src-slurmctld-node_scheduler.c
#	slurm-hpc-devel/files/patch-src-slurmctld-trigger_mgr.c
#	slurm-hpc-devel/files/patch-src-slurmd-common-setproctitle.c
#	slurm-hpc-devel/files/patch-src-slurmd-common-run_script.c
#	slurm-hpc-devel/files/patch-src-slurmd-slurmd-get_mach_stat.c
#	slurm-hpc-devel/files/patch-src-slurmd-slurmstepd-task.c
#	slurm-hpc-devel/files/patch-src-slurmd-slurmstepd-step_terminate_monitor.c
#	slurm-hpc-devel/files/patch-src-plugins-acct_gather_energy-rapl-acct_gather_energy_rapl.c
#	slurm-hpc-devel/files/patch-src-plugins-mpi-pmi2-agent.c
#	slurm-hpc-devel/files/patch-src-plugins-mpi-pmi2-pmi1.c
#	slurm-hpc-devel/files/patch-src-plugins-mpi-pmi2-pmi2.c
#	slurm-hpc-devel/files/patch-src-plugins-mpi-pmi2-setup.c
#	slurm-hpc-devel/files/patch-src-plugins-proctrack-linuxproc-proctrack_linuxproc.c
#	slurm-hpc-devel/files/patch-src-plugins-proctrack-cgroup-proctrack_cgroup.c
#	slurm-hpc-devel/files/patch-src-srun-libsrun-debugger.c
#	slurm-hpc-devel/files/slurmd.in
#	slurm-hpc-devel/files/patch-src-srun-libsrun-launch.c
#	slurm-hpc-devel/files/slurm.conf.in
#	slurm-hpc-devel/files/pkg-message.in
#	slurm-hpc-devel/files/slurmctld.in
#	slurm-hpc-devel/Makefile
#	slurm-hpc-devel/distinfo
#	slurm-hpc-devel/pkg-descr
#	slurm-hpc-devel/pkg-plist
#
echo c - slurm-hpc-devel
mkdir -p slurm-hpc-devel > /dev/null 2>&1
echo c - slurm-hpc-devel/files
mkdir -p slurm-hpc-devel/files > /dev/null 2>&1
echo x - slurm-hpc-devel/files/patch-src-common-xcgroup.c
sed 's/^X//' >slurm-hpc-devel/files/patch-src-common-xcgroup.c << 'b4927d2bce5d54493e7ef5c6d3fd3bbd'
X--- src/common/xcgroup.c.orig	2013-03-27 09:28:23.000000000 -0500
X+++ src/common/xcgroup.c	2013-03-27 09:29:08.000000000 -0500
X@@ -217,8 +217,13 @@
X 		options = opt_combined;
X 	}
X 
X+#if defined(__FreeBSD__)
X+	if (mount("cgroup", cgns->mnt_point,
X+		  MS_NOSUID|MS_NOEXEC|MS_NODEV, options))
X+#else
X 	if (mount("cgroup", cgns->mnt_point, "cgroup",
X 		  MS_NOSUID|MS_NOEXEC|MS_NODEV, options))
X+#endif
X 		return XCGROUP_ERROR;
X 	else {
X 		/* FIXME: this only gets set when we aren't mounted at
b4927d2bce5d54493e7ef5c6d3fd3bbd
echo x - slurm-hpc-devel/files/patch-src-common-xcgroup.h
sed 's/^X//' >slurm-hpc-devel/files/patch-src-common-xcgroup.h << '62cd645916fbc34c53a294b75cbfa867'
X--- src/common/xcgroup.h.orig	2013-03-08 13:29:51.000000000 -0600
X+++ src/common/xcgroup.h	2013-03-27 09:39:59.000000000 -0500
X@@ -48,6 +48,15 @@
X #define XCGROUP_ERROR    1
X #define XCGROUP_SUCCESS  0
X 
X+// http://lists.debian.org/debian-boot/2012/04/msg00047.html
X+#if defined(__FreeBSD__)
X+#define	MS_NOSUID	MNT_NOSUID
X+#define	MS_NOEXEC	MNT_NOEXEC
X+#define	MS_NODEV	0
X+
X+#define	umount(d)	unmount(d, 0)
X+#endif
X+
X typedef struct xcgroup_ns {
X 
X 	char* mnt_point;  /* mount point to use for the associated cgroup */
62cd645916fbc34c53a294b75cbfa867
echo x - slurm-hpc-devel/files/patch-src-common-slurm_jobacct_gather.c
sed 's/^X//' >slurm-hpc-devel/files/patch-src-common-slurm_jobacct_gather.c << '12ff779773c792e540448b3fe12376ea'
X--- src/common/slurm_jobacct_gather.c.orig	2013-03-08 13:29:51.000000000 -0600
X+++ src/common/slurm_jobacct_gather.c	2013-03-27 09:23:59.000000000 -0500
X@@ -47,6 +47,9 @@
X  *  	 Morris Jette, et al.
X \*****************************************************************************/
X 
X+#if defined(__FreeBSD__)
X+#include <signal.h>
X+#endif
X #include <pthread.h>
X #include <stdlib.h>
X #include <string.h>
12ff779773c792e540448b3fe12376ea
echo x - slurm-hpc-devel/files/patch-src-slurmctld-gang.c
sed 's/^X//' >slurm-hpc-devel/files/patch-src-slurmctld-gang.c << 'ca2604e4d805ec5fbc98cd3268fb9e2e'
X--- src/slurmctld/gang.c.orig	2013-03-08 13:29:51.000000000 -0600
X+++ src/slurmctld/gang.c	2013-03-27 09:23:59.000000000 -0500
X@@ -44,6 +44,9 @@
X #include <sys/types.h> /* for pid_t */
X #include <sys/signal.h> /* for SIGKILL */
X #endif
X+#if defined(__FreeBSD__)
X+#include <signal.h>
X+#endif
X #include <pthread.h>
X #include <unistd.h>
X 
ca2604e4d805ec5fbc98cd3268fb9e2e
echo x - slurm-hpc-devel/files/patch-src-slurmctld-job_scheduler.c
sed 's/^X//' >slurm-hpc-devel/files/patch-src-slurmctld-job_scheduler.c << 'e52c6dd6e3c04c5d2a375b2c3725a54b'
X--- src/slurmctld/job_scheduler.c.orig	2013-03-08 13:29:51.000000000 -0600
X+++ src/slurmctld/job_scheduler.c	2013-03-27 09:23:59.000000000 -0500
X@@ -46,6 +46,9 @@
X #include <sys/types.h> /* for pid_t */
X #include <sys/signal.h> /* for SIGKILL */
X #endif
X+#if defined(__FreeBSD__)
X+#include <signal.h> /* for SIGKILL */
X+#endif
X #include <errno.h>
X #include <stdio.h>
X #include <stdlib.h>
e52c6dd6e3c04c5d2a375b2c3725a54b
echo x - slurm-hpc-devel/files/patch-src-slurmctld-node_scheduler.c
sed 's/^X//' >slurm-hpc-devel/files/patch-src-slurmctld-node_scheduler.c << 'b553301e1c33d73843e51adcd1a1f793'
X--- src/slurmctld/node_scheduler.c.orig	2013-03-27 09:31:56.000000000 -0500
X+++ src/slurmctld/node_scheduler.c	2013-03-27 09:32:23.000000000 -0500
X@@ -51,6 +51,9 @@
X #include <sys/types.h> /* for pid_t */
X #include <sys/signal.h> /* for SIGKILL */
X #endif
X+#if defined(__FreeBSD__)
X+#include <signal.h>
X+#endif
X #include <errno.h>
X #include <pthread.h>
X #include <stdio.h>
b553301e1c33d73843e51adcd1a1f793
echo x - slurm-hpc-devel/files/patch-src-slurmctld-trigger_mgr.c
sed 's/^X//' >slurm-hpc-devel/files/patch-src-slurmctld-trigger_mgr.c << '751e6286cc8997d7863c5d6758ba5d53'
X--- src/slurmctld/trigger_mgr.c.orig	2013-03-27 09:33:16.000000000 -0500
X+++ src/slurmctld/trigger_mgr.c	2013-03-27 09:33:41.000000000 -0500
X@@ -49,6 +49,9 @@
X #include <sys/types.h> /* for pid_t */
X #include <sys/signal.h> /* for SIGKILL */
X #endif
X+#if defined(__FreeBSD__)
X+#include <signal.h>
X+#endif
X #include <errno.h>
X #include <fcntl.h>
X #include <grp.h>
751e6286cc8997d7863c5d6758ba5d53
echo x - slurm-hpc-devel/files/patch-src-slurmd-common-setproctitle.c
sed 's/^X//' >slurm-hpc-devel/files/patch-src-slurmd-common-setproctitle.c << '6abeac9e37da1f05eb6da367827d38bf'
X--- src/slurmd/common/setproctitle.c.orig	2013-03-27 09:43:43.000000000 -0500
X+++ src/slurmd/common/setproctitle.c	2013-03-27 09:53:07.000000000 -0500
X@@ -89,6 +89,9 @@
X #include <stdlib.h>
X #include <string.h>
X #endif
X+#if defined(__FreeBSD__)
X+#include <stdio.h>
X+#endif
X #ifndef HAVE_SETPROCTITLE
X #include <stdlib.h>
X #include <stdio.h>
X@@ -264,7 +267,7 @@
X 	save_argc = argc;
X 	save_argv = argv;
X 
X-#if defined(__NetBSD__)
X+#if defined(__NetBSD__) || defined(__FreeBSD__)
X 	setprogname (argv[0]);
X #else
X 	_init__progname (argv[0]);
6abeac9e37da1f05eb6da367827d38bf
echo x - slurm-hpc-devel/files/patch-src-slurmd-common-run_script.c
sed 's/^X//' >slurm-hpc-devel/files/patch-src-slurmd-common-run_script.c << '38a46ddb9c886093b6db1d798b30b89d'
X--- src/slurmd/common/run_script.c.orig	2013-03-27 09:46:40.000000000 -0500
X+++ src/slurmd/common/run_script.c	2013-03-27 09:47:15.000000000 -0500
X@@ -44,6 +44,9 @@
X #include <sys/types.h> /* for pid_t */
X #include <sys/signal.h> /* for SIGKILL */
X #endif
X+#if defined(__FreeBSD__)
X+#include <signal.h>
X+#endif
X #include <poll.h>
X #include <stdlib.h>
X #include <sys/wait.h>
38a46ddb9c886093b6db1d798b30b89d
echo x - slurm-hpc-devel/files/patch-src-slurmd-slurmd-get_mach_stat.c
sed 's/^X//' >slurm-hpc-devel/files/patch-src-slurmd-slurmd-get_mach_stat.c << '5e0fa6566329fc6ab3e594266d8f53c0'
X--- src/slurmd/slurmd/get_mach_stat.c.orig	2013-03-27 09:49:04.000000000 -0500
X+++ src/slurmd/slurmd/get_mach_stat.c	2013-03-27 09:49:31.000000000 -0500
X@@ -55,6 +55,9 @@
X #endif
X 
X #ifdef HAVE_SYS_SYSCTL_H
X+#if defined(__FreeBSD__)
X+#include <sys/types.h>
X+#endif
X # include <sys/sysctl.h>
X #endif
X 
5e0fa6566329fc6ab3e594266d8f53c0
echo x - slurm-hpc-devel/files/patch-src-slurmd-slurmstepd-task.c
sed 's/^X//' >slurm-hpc-devel/files/patch-src-slurmd-slurmstepd-task.c << 'f3870e15ce0f07180b526e426395e8a1'
X--- src/slurmd/slurmstepd/task.c.orig	2013-03-27 09:54:00.000000000 -0500
X+++ src/slurmd/slurmstepd/task.c	2013-03-27 09:59:07.000000000 -0500
X@@ -500,6 +500,11 @@
X 		 * has been around for a while.  So to make sure we
X 		 * still work with older systems we include this check.
X 		 */
X+
X+#if defined(__FreeBSD__)
X+#define	__GLIBC__ 		(1)
X+#define __GLIBC_PREREQ(a,b)	(1)
X+#endif
X #if defined __GLIBC__ && __GLIBC_PREREQ(2, 4)
X 		else if (eaccess(tmpdir, X_OK|W_OK)) /* check permissions */
X #else
f3870e15ce0f07180b526e426395e8a1
echo x - slurm-hpc-devel/files/patch-src-slurmd-slurmstepd-step_terminate_monitor.c
sed 's/^X//' >slurm-hpc-devel/files/patch-src-slurmd-slurmstepd-step_terminate_monitor.c << '0ae2d8a131771bef61156798dc855a76'
X--- src/slurmd/slurmstepd/step_terminate_monitor.c.orig	2013-03-27 09:59:59.000000000 -0500
X+++ src/slurmd/slurmstepd/step_terminate_monitor.c	2013-03-27 10:00:32.000000000 -0500
X@@ -39,6 +39,9 @@
X #include <sys/types.h> /* for pid_t */
X #include <sys/signal.h> /* for SIGKILL */
X #endif
X+#if defined(__FreeBSD__)
X+#include <signal.h>
X+#endif
X #include <stdlib.h>
X #include <sys/wait.h>
X #include <sys/errno.h>
0ae2d8a131771bef61156798dc855a76
echo x - slurm-hpc-devel/files/patch-src-plugins-acct_gather_energy-rapl-acct_gather_energy_rapl.c
sed 's/^X//' >slurm-hpc-devel/files/patch-src-plugins-acct_gather_energy-rapl-acct_gather_energy_rapl.c << 'ee4876cb6d1dca4cc0b1aaf6f93d62f1'
X--- src/plugins/acct_gather_energy/rapl/acct_gather_energy_rapl.c.orig	2013-03-27 10:02:06.000000000 -0500
X+++ src/plugins/acct_gather_energy/rapl/acct_gather_energy_rapl.c	2013-03-27 10:05:22.000000000 -0500
X@@ -67,6 +67,11 @@
X #include <math.h>
X #include "acct_gather_energy_rapl.h"
X 
X+/* From Linux sys/types.h */
X+#if defined(__FreeBSD__)
X+typedef unsigned long int	ulong;
X+#endif
X+
X union {
X 	uint64_t val;
X 	struct {
ee4876cb6d1dca4cc0b1aaf6f93d62f1
echo x - slurm-hpc-devel/files/patch-src-plugins-mpi-pmi2-agent.c
sed 's/^X//' >slurm-hpc-devel/files/patch-src-plugins-mpi-pmi2-agent.c << '9c264ffd4f18f45f2839ddd62fff2281'
X--- src/plugins/mpi/pmi2/agent.c.orig	2013-03-27 10:07:23.000000000 -0500
X+++ src/plugins/mpi/pmi2/agent.c	2013-03-27 10:52:44.000000000 -0500
X@@ -39,6 +39,11 @@
X #  include "config.h"
X #endif
X 
X+#if defined(__FreeBSD__)
X+#include <roken.h>
X+#include <sys/socket.h>	/* AF_INET */
X+#endif
X+
X #include <fcntl.h>
X #include <signal.h>
X #include <sys/types.h>
9c264ffd4f18f45f2839ddd62fff2281
echo x - slurm-hpc-devel/files/patch-src-plugins-mpi-pmi2-pmi1.c
sed 's/^X//' >slurm-hpc-devel/files/patch-src-plugins-mpi-pmi2-pmi1.c << '27c92a1c36fa3f78796fa31b8d93b520'
X--- src/plugins/mpi/pmi2/pmi1.c.orig	2013-03-27 10:54:19.000000000 -0500
X+++ src/plugins/mpi/pmi2/pmi1.c	2013-03-27 10:54:33.000000000 -0500
X@@ -39,6 +39,11 @@
X #  include "config.h"
X #endif
X 
X+#if defined(__FreeBSD__)
X+#include <roken.h>
X+#include <sys/socket.h> /* AF_INET */
X+#endif
X+
X #include <fcntl.h>
X #include <signal.h>
X #include <sys/types.h>
27c92a1c36fa3f78796fa31b8d93b520
echo x - slurm-hpc-devel/files/patch-src-plugins-mpi-pmi2-pmi2.c
sed 's/^X//' >slurm-hpc-devel/files/patch-src-plugins-mpi-pmi2-pmi2.c << '2a9c02ce37301138f16da8c7eb4015a8'
X--- src/plugins/mpi/pmi2/pmi2.c.orig	2013-03-27 10:54:51.000000000 -0500
X+++ src/plugins/mpi/pmi2/pmi2.c	2013-03-27 10:54:59.000000000 -0500
X@@ -39,6 +39,11 @@
X #  include "config.h"
X #endif
X 
X+#if defined(__FreeBSD__)
X+#include <roken.h>
X+#include <sys/socket.h> /* AF_INET */
X+#endif
X+
X #include <fcntl.h>
X #include <signal.h>
X #include <sys/types.h>
2a9c02ce37301138f16da8c7eb4015a8
echo x - slurm-hpc-devel/files/patch-src-plugins-mpi-pmi2-setup.c
sed 's/^X//' >slurm-hpc-devel/files/patch-src-plugins-mpi-pmi2-setup.c << 'ec1a6f0dc3ca883ea3a23c41d6feeac1'
X--- src/plugins/mpi/pmi2/setup.c.orig	2013-03-27 10:55:35.000000000 -0500
X+++ src/plugins/mpi/pmi2/setup.c	2013-03-27 10:56:04.000000000 -0500
X@@ -39,6 +39,10 @@
X #  include "config.h"
X #endif
X 
X+#if defined(__FreeBSD__)
X+#include <sys/socket.h>	/* AF_INET */
X+#endif
X+
X #include <fcntl.h>
X #include <signal.h>
X #include <sys/types.h>
ec1a6f0dc3ca883ea3a23c41d6feeac1
echo x - slurm-hpc-devel/files/patch-src-plugins-proctrack-linuxproc-proctrack_linuxproc.c
sed 's/^X//' >slurm-hpc-devel/files/patch-src-plugins-proctrack-linuxproc-proctrack_linuxproc.c << '2616448846819de019936ca0f13e0d42'
X--- src/plugins/proctrack/linuxproc/proctrack_linuxproc.c.orig	2013-03-27 10:56:36.000000000 -0500
X+++ src/plugins/proctrack/linuxproc/proctrack_linuxproc.c	2013-03-27 10:57:14.000000000 -0500
X@@ -51,6 +51,9 @@
X #include <sys/types.h> /* for pid_t */
X #include <sys/signal.h> /* for SIGKILL */
X #endif
X+#if defined(__FreeBSD__)
X+#include <signal.h>	/* SIGKILL */
X+#endif
X #include <sys/types.h>
X 
X #include "slurm/slurm.h"
2616448846819de019936ca0f13e0d42
echo x - slurm-hpc-devel/files/patch-src-plugins-proctrack-cgroup-proctrack_cgroup.c
sed 's/^X//' >slurm-hpc-devel/files/patch-src-plugins-proctrack-cgroup-proctrack_cgroup.c << '710d86eb2740b1f24b2516ed2fd8be4f'
X--- src/plugins/proctrack/cgroup/proctrack_cgroup.c.orig	2013-03-27 10:57:45.000000000 -0500
X+++ src/plugins/proctrack/cgroup/proctrack_cgroup.c	2013-03-27 10:58:04.000000000 -0500
X@@ -50,6 +50,10 @@
X #include <sys/signal.h> /* for SIGKILL */
X #endif
X 
X+#if defined(__FreeBSD__)
X+#include <signal.h>
X+#endif
X+
X #include "slurm/slurm.h"
X #include "slurm/slurm_errno.h"
X #include "src/common/log.h"
710d86eb2740b1f24b2516ed2fd8be4f
echo x - slurm-hpc-devel/files/patch-src-srun-libsrun-debugger.c
sed 's/^X//' >slurm-hpc-devel/files/patch-src-srun-libsrun-debugger.c << 'b1ad5f3dc8bf5d078d1510dd8b85d670'
X--- src/srun/libsrun/debugger.c.orig	2013-03-27 10:59:55.000000000 -0500
X+++ src/srun/libsrun/debugger.c	2013-03-27 11:00:39.000000000 -0500
X@@ -38,6 +38,10 @@
X  *  51 Franklin Street, Fifth Floor, Boston, MA 02110-1301  USA.
X \*****************************************************************************/
X 
X+#if defined(__FreeBSD__)
X+#include <signal.h>
X+#endif
X+
X #if HAVE_CONFIG_H
X #  include "config.h"
X #endif
b1ad5f3dc8bf5d078d1510dd8b85d670
echo x - slurm-hpc-devel/files/slurmd.in
sed 's/^X//' >slurm-hpc-devel/files/slurmd.in << '29f2c40185ac739caf591e8c962a18ca'
X#!/bin/sh
X
X# PROVIDE: slurmd
X# REQUIRE: DAEMON munge
X# BEFORE: LOGIN
X# KEYWORD: shutdown
X#
X# Add the following lines to /etc/rc.conf.local or /etc/rc.conf
X# to enable this service:
X#
X# slurmd_enable (bool):   Set to NO by default.
X#               Set it to YES to enable slurmd.
X#
X
X. /etc/rc.subr
X
Xname="slurmd"
Xrcvar=slurmd_enable
X
Xpidfile=/var/run/$name.pid
X
Xload_rc_config $name
X
X: ${slurmd_enable="NO"}
X
Xstart_cmd=slurmd_start
Xstop_cmd=slurmd_stop
X
Xslurmd_start() {
X    checkyesno slurmd_enable && echo "Starting $name." && \
X	%%PREFIX%%/sbin/$name $slurmd_flags
X}
X
Xslurmd_stop() {
X    if [ -e $pidfile ]; then
X        checkyesno slurmd_enable && echo "Stopping $name." && \
X	    kill `cat $pidfile`
X    else
X        killall $name
X    fi
X}
X
Xrun_rc_command "$1"
29f2c40185ac739caf591e8c962a18ca
echo x - slurm-hpc-devel/files/patch-src-srun-libsrun-launch.c
sed 's/^X//' >slurm-hpc-devel/files/patch-src-srun-libsrun-launch.c << 'dd5a2e34b503660811c223a4b8bfc620'
X--- src/srun/libsrun/launch.c.orig	2013-03-27 11:01:12.000000000 -0500
X+++ src/srun/libsrun/launch.c	2013-03-27 11:01:39.000000000 -0500
X@@ -36,6 +36,10 @@
X #include <stdlib.h>
X #include <fcntl.h>
X 
X+#if defined(__FreeBSD__)
X+#include <signal.h>
X+#endif
X+
X #include "launch.h"
X 
X #include "src/common/env.h"
dd5a2e34b503660811c223a4b8bfc620
echo x - slurm-hpc-devel/files/slurm.conf.in
sed 's/^X//' >slurm-hpc-devel/files/slurm.conf.in << '39b9510c9b06784424d0ee9386c81e81'
X# slurm.conf file generated by configurator.html.
X# Put this file on all nodes of your cluster.
X# See the slurm.conf man page for more information.
X#
XControlMachine=%%CONTROL_MACHINE%%
X#ControlAddr=
X#BackupController=%%BACKUP_CONTROL_MACHINE%%
X#BackupAddr=
X#
XAuthType=auth/munge
XCacheGroups=0
X#CheckpointType=checkpoint/none
XCryptoType=crypto/munge
X#DisableRootJobs=NO
X#EnforcePartLimits=NO
X#Epilog=
X#EpilogSlurmctld=
X#FirstJobId=1
X#MaxJobId=999999
X#GresTypes=
X#GroupUpdateForce=0
X#GroupUpdateTime=600
X#JobCheckpointDir=/var/slurm/checkpoint
X#JobCredentialPrivateKey=
X#JobCredentialPublicCertificate=
X#JobFileAppend=0
X#JobRequeue=1
X#JobSubmitPlugins=1
X#KillOnBadExit=0
X#LaunchType=launch/slurm
X#Licenses=foo*4,bar
XMailProg=/usr/bin/mail
X#MaxJobCount=5000
X#MaxStepCount=40000
X#MaxTasksPerNode=128
XMpiDefault=none
X#MpiParams=ports=#-#
X#PluginDir=
X#PlugStackConfig=
X#PrivateData=jobs
XProctrackType=proctrack/pgid
X#Prolog=
X#PrologSlurmctld=
X#PropagatePrioProcess=0
X#PropagateResourceLimits=
X# Prevent head node limits from being applied to jobs!
XPropagateResourceLimitsExcept=ALL
X#RebootProgram=
XReturnToService=1
X#SallocDefaultCommand=
XSlurmctldPidFile=/var/run/slurmctld.pid
XSlurmctldPort=6817
XSlurmdPidFile=/var/run/slurmd.pid
XSlurmdPort=6818
XSlurmdSpoolDir=/var/spool/slurmd
XSlurmUser=slurm
X#SlurmdUser=root
X#SrunEpilog=
X#SrunProlog=
XStateSaveLocation=/home/slurm/slurmctld
XSwitchType=switch/none
X#TaskEpilog=
XTaskPlugin=task/none
X#TaskPluginParam=
X#TaskProlog=
X#TopologyPlugin=topology/tree
X#TmpFs=/tmp
X#TrackWCKey=no
X#TreeWidth=
X#UnkillableStepProgram=
X#UsePAM=0
X#
X#
X# TIMERS
X#BatchStartTimeout=10
X#CompleteWait=0
X#EpilogMsgTime=2000
X#GetEnvTimeout=2
X#HealthCheckInterval=0
X#HealthCheckProgram=
XInactiveLimit=0
XKillWait=30
X#MessageTimeout=10
X#ResvOverRun=0
XMinJobAge=300
X#OverTimeLimit=0
XSlurmctldTimeout=120
XSlurmdTimeout=300
X#UnkillableStepTimeout=60
X#VSizeFactor=0
XWaittime=0
X#
X#
X# SCHEDULING
X#DefMemPerCPU=0
XFastSchedule=1
X#MaxMemPerCPU=0
X#SchedulerRootFilter=1
X#SchedulerTimeSlice=30
XSchedulerType=sched/backfill
XSchedulerPort=7321
XSelectType=select/cons_res
X#SelectTypeParameters=
X#
X#
X# JOB PRIORITY
X#PriorityType=priority/basic
X#PriorityDecayHalfLife=
X#PriorityCalcPeriod=
X#PriorityFavorSmall=
X#PriorityMaxAge=
X#PriorityUsageResetPeriod=
X#PriorityWeightAge=
X#PriorityWeightFairshare=
X#PriorityWeightJobSize=
X#PriorityWeightPartition=
X#PriorityWeightQOS=
X#
X#
X# LOGGING AND ACCOUNTING
X#AccountingStorageEnforce=0
X#AccountingStorageHost=
X#AccountingStorageLoc=
X#AccountingStoragePass=
X#AccountingStoragePort=
XAccountingStorageType=accounting_storage/none
X#AccountingStorageUser=
XAccountingStoreJobComment=YES
XClusterName=cluster
X#DebugFlags=
X#JobCompHost=
X#JobCompLoc=
X#JobCompPass=
X#JobCompPort=
XJobCompType=jobcomp/none
X#JobCompUser=
XJobAcctGatherFrequency=30
XJobAcctGatherType=jobacct_gather/none
XSlurmctldDebug=5
XSlurmctldLogFile=/var/log/slurmctld
XSlurmdDebug=5
XSlurmdLogFile=/var/log/slurmd
X#SlurmSchedLogFile=
X#SlurmSchedLogLevel=
X#
X#
X# POWER SAVE SUPPORT FOR IDLE NODES (optional)
X#SuspendProgram=
X#ResumeProgram=
X#SuspendTimeout=
X#ResumeTimeout=
X#ResumeRate=
X#SuspendExcNodes=
X#SuspendExcParts=
X#SuspendRate=
X#SuspendTime=
X#
X#
X# COMPUTE NODES
X
X#############################################################################
X# Note: Using CPUs=2 or Sockets=2 causes slurmctld 2.5.4 to seg fault on
X#       FreeBSD.
X#       Use Sockets=1, CoresPerSocket=total-cores-in-node, and
X#       ThreadsPerCore=N, even if your motherboard has more than 1 socket.
X#       This problem is being investigated by the slurm developers.
X#############################################################################
X
XNodeName=compute-[001-002] Sockets=1 CoresPerSocket=1 ThreadsPerCore=1 State=UNKNOWN
XPartitionName=default-partition Nodes=compute-[001-002] Default=YES MaxTime=INFINITE State=UP
39b9510c9b06784424d0ee9386c81e81
echo x - slurm-hpc-devel/files/pkg-message.in
sed 's/^X//' >slurm-hpc-devel/files/pkg-message.in << '80be8cff9e02531aecd4fd4e810a73be'
X
X-------------------------------------------------------------
XA sample configuration file is provided in
X
X    %%EXAMPLESDIR%%/slurm.conf
X
XA similar file must be installed in
X
X    %%PREFIX%%/etc
X
Xon the controller node in order for slurmctld to function.
X-------------------------------------------------------------
X
80be8cff9e02531aecd4fd4e810a73be
echo x - slurm-hpc-devel/files/slurmctld.in
sed 's/^X//' >slurm-hpc-devel/files/slurmctld.in << 'eb3c2a1298d7acae961341c4bc24f316'
X#!/bin/sh
X
X# PROVIDE: slurmctld
X# REQUIRE: DAEMON munge
X# BEFORE: LOGIN
X# KEYWORD: shutdown
X#
X# Add the following lines to /etc/rc.conf.local or /etc/rc.conf
X# to enable this service:
X#
X# slurmctld_enable (bool):   Set to NO by default.
X#               Set it to YES to enable slurmctld.
X#
X
X. /etc/rc.subr
X
Xname="slurmctld"
Xrcvar=slurmctld_enable
X
Xpidfile=/var/run/$name.pid
X
Xload_rc_config $name
X
X: ${slurmctld_enable="NO"}
X
Xstart_cmd=slurmctld_start
Xstop_cmd=slurmctld_stop
X
Xslurmctld_start() {
X    checkyesno slurmctld_enable && echo "Starting $name." && \
X	%%PREFIX%%/sbin/$name $slurmctld_flags
X}
X
Xslurmctld_stop() {
X    if [ -e $pidfile ]; then
X        checkyesno slurmctld_enable && echo "Stopping $name." && \
X	    kill `cat $pidfile`
X    else
X	killall $name
X    fi
X}
X
Xrun_rc_command "$1"
eb3c2a1298d7acae961341c4bc24f316
echo x - slurm-hpc-devel/Makefile
sed 's/^X//' >slurm-hpc-devel/Makefile << 'f1b7d3fb34c28aa279749178dc1d8854'
X# Created by:	Jason Bacon
X# $FreeBSD$
X
XPORTNAME=	slurm
XPORTVERSION=	2.5.4
XCATEGORIES=	sysutils
XMASTER_SITES=	http://www.schedmd.com/download/archive/ \
X		http://www.schedmd.com/download/latest/ \
X		http://www.schedmd.com/download/development/
X
XMAINTAINER=	jwbacon@tds.net
XCOMMENT=	Simple Linux Resource Manager
X
XLICENSE=	GPLv1
X
XCONFLICTS_INSTALL=	slurm-2.6*
X
X# slurmctld default port = 6817
X# slurmd default port = 6818
X# wiki/wiki2 default port = 7321
X
XBUILD_DEPENDS+=	${LOCALBASE}/include/sys/sysinfo.h:${PORTSDIR}/devel/libsysinfo
XRUN_DEPENDS+=	munge:${PORTSDIR}/security/munge
X
X# OPTIONS	mysql, totalview, padb, hostlist, qsnetlibs+libelanhosts
X#		io-watchdog, pam-slurm, sqlog
X
X# Install munge.key to all nodes and start munge daemons before slurm
X# RUN_DEPENDS=	munge
X# Create SlurmUser (Unix: slurm)
X# Use doc/html/configurator.html to generate slurm.conf
X# install into sysconfdir
X# start slurm daemons
X
XUSE_BZIP2=	yes
XUSE_LDCONFIG=	yes
XUSE_MYSQL=	yes
XGNU_CONFIGURE=	yes
X
X# Maximize debugging info until further notice
XCFLAGS=		-I${LOCALBASE}/include -g -O0
XLDFLAGS+=	-L${LOCALBASE}/lib -lsysinfo -lkvm
X
XSUB_FILES=	slurm.conf pkg-message
X
XUSERS=		slurm
XGROUPS=		${USERS}
X
XUSE_RC_SUBR=	slurmctld slurmd
X
XMAN1=	\
X	sacct.1 \
X	sacctmgr.1 \
X	salloc.1 \
X	sattach.1 \
X	sbatch.1 \
X	sbcast.1 \
X	scancel.1 \
X	scontrol.1 \
X	sdiag.1 \
X	sinfo.1 \
X	slurm.1 \
X	smap.1 \
X	sprio.1 \
X	squeue.1 \
X	sreport.1 \
X	srun.1 \
X	srun_cr.1 \
X	sshare.1 \
X	sstat.1 \
X	strigger.1 \
X	sview.1
X
XMAN3=	\
X	slurm_allocate_resources.3 \
X	slurm_allocate_resources_blocking.3 \
X	slurm_allocation_lookup.3 \
X	slurm_allocation_lookup_lite.3 \
X	slurm_allocation_msg_thr_create.3 \
X	slurm_allocation_msg_thr_destroy.3 \
X	slurm_api_version.3 \
X	slurm_checkpoint.3 \
X	slurm_checkpoint_able.3 \
X	slurm_checkpoint_complete.3 \
X	slurm_checkpoint_create.3 \
X	slurm_checkpoint_disable.3 \
X	slurm_checkpoint_enable.3 \
X	slurm_checkpoint_error.3 \
X	slurm_checkpoint_failed.3 \
X	slurm_checkpoint_restart.3 \
X	slurm_checkpoint_task_complete.3 \
X	slurm_checkpoint_tasks.3 \
X	slurm_checkpoint_vacate.3 \
X	slurm_clear_trigger.3 \
X	slurm_complete_job.3 \
X	slurm_confirm_allocation.3 \
X	slurm_create_partition.3 \
X	slurm_create_reservation.3 \
X	slurm_delete_partition.3 \
X	slurm_delete_reservation.3 \
X	slurm_free_ctl_conf.3 \
X	slurm_free_front_end_info_msg.3 \
X	slurm_free_job_alloc_info_response_msg.3 \
X	slurm_free_job_info_msg.3 \
X	slurm_free_job_step_create_response_msg.3 \
X	slurm_free_job_step_info_response_msg.3 \
X	slurm_free_node_info.3 \
X	slurm_free_node_info_msg.3 \
X	slurm_free_partition_info.3 \
X	slurm_free_partition_info_msg.3 \
X	slurm_free_reservation_info_msg.3 \
X	slurm_free_resource_allocation_response_msg.3 \
X	slurm_free_slurmd_status.3 \
X	slurm_free_submit_response_response_msg.3 \
X	slurm_free_trigger_msg.3 \
X	slurm_get_end_time.3 \
X	slurm_get_errno.3 \
X	slurm_get_job_steps.3 \
X	slurm_get_rem_time.3 \
X	slurm_get_select_jobinfo.3 \
X	slurm_get_triggers.3 \
X	slurm_hostlist_create.3 \
X	slurm_hostlist_destroy.3 \
X	slurm_hostlist_shift.3 \
X	slurm_init_job_desc_msg.3 \
X	slurm_init_part_desc_msg.3 \
X	slurm_init_resv_desc_msg.3 \
X	slurm_init_trigger_msg.3 \
X	slurm_init_update_front_end_msg.3 \
X	slurm_init_update_node_msg.3 \
X	slurm_init_update_step_msg.3 \
X	slurm_job_cpus_allocated_on_node.3 \
X	slurm_job_cpus_allocated_on_node_id.3 \
X	slurm_job_step_create.3 \
X	slurm_job_step_launch_t_init.3 \
X	slurm_job_step_layout_free.3 \
X	slurm_job_step_layout_get.3 \
X	slurm_job_will_run.3 \
X	slurm_jobinfo_ctx_get.3 \
X	slurm_kill_job.3 \
X	slurm_kill_job_step.3 \
X	slurm_load_ctl_conf.3 \
X	slurm_load_front_end.3 \
X	slurm_load_job.3 \
X	slurm_load_jobs.3 \
X	slurm_load_node.3 \
X	slurm_load_partitions.3 \
X	slurm_load_reservations.3 \
X	slurm_load_slurmd_status.3 \
X	slurm_notify_job.3 \
X	slurm_perror.3 \
X	slurm_pid2jobid.3 \
X	slurm_ping.3 \
X	slurm_print_ctl_conf.3 \
X	slurm_print_front_end_info_msg.3 \
X	slurm_print_front_end_table.3 \
X	slurm_print_job_info.3 \
X	slurm_print_job_info_msg.3 \
X	slurm_print_job_step_info.3 \
X	slurm_print_job_step_info_msg.3 \
X	slurm_print_node_info_msg.3 \
X	slurm_print_node_table.3 \
X	slurm_print_partition_info.3 \
X	slurm_print_partition_info_msg.3 \
X	slurm_print_reservation_info.3 \
X	slurm_print_reservation_info_msg.3 \
X	slurm_print_slurmd_status.3 \
X	slurm_read_hostfile.3 \
X	slurm_reconfigure.3 \
X	slurm_requeue.3 \
X	slurm_resume.3 \
X	slurm_set_debug_level.3 \
X	slurm_set_trigger.3 \
X	slurm_shutdown.3 \
X	slurm_signal_job.3 \
X	slurm_signal_job_step.3 \
X	slurm_slurmd_status.3 \
X	slurm_sprint_front_end_table.3 \
X	slurm_sprint_job_info.3 \
X	slurm_sprint_job_step_info.3 \
X	slurm_sprint_node_table.3 \
X	slurm_sprint_partition_info.3 \
X	slurm_sprint_reservation_info.3 \
X	slurm_step_ctx_create.3 \
X	slurm_step_ctx_create_no_alloc.3 \
X	slurm_step_ctx_daemon_per_node_hack.3 \
X	slurm_step_ctx_destroy.3 \
X	slurm_step_ctx_get.3 \
X	slurm_step_ctx_params_t_init.3 \
X	slurm_step_launch.3 \
X	slurm_step_launch_abort.3 \
X	slurm_step_launch_fwd_signal.3 \
X	slurm_step_launch_wait_finish.3 \
X	slurm_step_launch_wait_start.3 \
X	slurm_strerror.3 \
X	slurm_submit_batch_job.3 \
X	slurm_suspend.3 \
X	slurm_takeover.3 \
X	slurm_terminate_job.3 \
X	slurm_terminate_job_step.3 \
X	slurm_update_front_end.3 \
X	slurm_update_job.3 \
X	slurm_update_node.3 \
X	slurm_update_partition.3 \
X	slurm_update_reservation.3 \
X	slurm_update_step.3
X
XMAN5=	\
X	bluegene.conf.5 \
X	cgroup.conf.5 \
X	cray.conf.5 \
X	gres.conf.5 \
X	slurm.conf.5 \
X	slurmdbd.conf.5 \
X	topology.conf.5 \
X	wiki.conf.5
X
XMAN8 =	\
X	slurmctld.8 \
X	slurmd.8 \
X	slurmdbd.8 \
X	slurmstepd.8 \
X	spank.8
X
Xpost-install:
X.ifdef(NOPORTDOCS)
X	${RM} -rf ${DOCSDIR}-${PORTVERSION}
X.endif
X.ifndef(NOPORTEXAMPLES)
X	${MKDIR} ${EXAMPLESDIR}
X	${INSTALL_DATA} ${WRKDIR}/slurm.conf ${EXAMPLESDIR}
X.endif
X	@${CAT} ${WRKDIR}/pkg-message
X
X.include <bsd.port.mk>
f1b7d3fb34c28aa279749178dc1d8854
echo x - slurm-hpc-devel/distinfo
sed 's/^X//' >slurm-hpc-devel/distinfo << 'b45bcc442a0acfa506f3d66080f803e8'
XSHA256 (slurm-2.5.4.tar.bz2) = c713ea74742ce14a27b88b02f1a475bc71cde22ad3a323a4d669530d8b68f09e
XSIZE (slurm-2.5.4.tar.bz2) = 5497719
b45bcc442a0acfa506f3d66080f803e8
echo x - slurm-hpc-devel/pkg-descr
sed 's/^X//' >slurm-hpc-devel/pkg-descr << '2ce523974851b34f5b185acd193d787d'
XSLURM is an open-source resource manager designed for Linux clusters of all
Xsizes. It provides three key functions. First it allocates exclusive and/or
Xnon-exclusive access to resources (computer nodes) to users for some duration
Xof time so they can perform work. Second, it provides a framework for starting,
Xexecuting, and monitoring work (typically a parallel job) on a set of allocated
Xnodes. Finally, it arbitrates contention for resources by managing a queue of
Xpending work.
X
XWWW:	https://computing.llnl.gov/linux/slurm/
2ce523974851b34f5b185acd193d787d
echo x - slurm-hpc-devel/pkg-plist
sed 's/^X//' >slurm-hpc-devel/pkg-plist << 'a53945c2e5c91142146a599d60f75628'
Xbin/sacct
Xbin/sacctmgr
Xbin/salloc
Xbin/sattach
Xbin/sbatch
Xbin/sbcast
Xbin/scancel
Xbin/scontrol
Xbin/sdiag
Xbin/sinfo
Xbin/smap
Xbin/sprio
Xbin/squeue
Xbin/sreport
Xbin/srun
Xbin/sshare
Xbin/sstat
Xbin/strigger
Xbin/sview
Xinclude/slurm/pmi.h
Xinclude/slurm/slurm.h
Xinclude/slurm/slurm_errno.h
Xinclude/slurm/slurmdb.h
Xinclude/slurm/spank.h
Xlib/libpmi.a
Xlib/libpmi.la
Xlib/libpmi.so
Xlib/libpmi.so.0
Xlib/libslurm.a
Xlib/libslurm.la
Xlib/libslurm.so
Xlib/libslurm.so.25
Xlib/libslurmdb.a
Xlib/libslurmdb.la
Xlib/libslurmdb.so
Xlib/libslurmdb.so.25
Xlib/slurm/accounting_storage_filetxt.a
Xlib/slurm/accounting_storage_filetxt.la
Xlib/slurm/accounting_storage_filetxt.so
Xlib/slurm/accounting_storage_mysql.a
Xlib/slurm/accounting_storage_mysql.la
Xlib/slurm/accounting_storage_mysql.so
Xlib/slurm/accounting_storage_none.a
Xlib/slurm/accounting_storage_none.la
Xlib/slurm/accounting_storage_none.so
Xlib/slurm/accounting_storage_pgsql.a
Xlib/slurm/accounting_storage_pgsql.la
Xlib/slurm/accounting_storage_pgsql.so
Xlib/slurm/accounting_storage_slurmdbd.a
Xlib/slurm/accounting_storage_slurmdbd.la
Xlib/slurm/accounting_storage_slurmdbd.so
Xlib/slurm/acct_gather_energy_ipmi.a
Xlib/slurm/acct_gather_energy_ipmi.la
Xlib/slurm/acct_gather_energy_ipmi.so
Xlib/slurm/acct_gather_energy_none.a
Xlib/slurm/acct_gather_energy_none.la
Xlib/slurm/acct_gather_energy_none.so
Xlib/slurm/acct_gather_energy_rapl.a
Xlib/slurm/acct_gather_energy_rapl.la
Xlib/slurm/acct_gather_energy_rapl.so
Xlib/slurm/auth_munge.a
Xlib/slurm/auth_munge.la
Xlib/slurm/auth_munge.so
Xlib/slurm/auth_none.a
Xlib/slurm/auth_none.la
Xlib/slurm/auth_none.so
Xlib/slurm/checkpoint_none.a
Xlib/slurm/checkpoint_none.la
Xlib/slurm/checkpoint_none.so
Xlib/slurm/checkpoint_ompi.a
Xlib/slurm/checkpoint_ompi.la
Xlib/slurm/checkpoint_ompi.so
Xlib/slurm/crypto_munge.a
Xlib/slurm/crypto_munge.la
Xlib/slurm/crypto_munge.so
Xlib/slurm/crypto_openssl.a
Xlib/slurm/crypto_openssl.la
Xlib/slurm/crypto_openssl.so
Xlib/slurm/gres_gpu.a
Xlib/slurm/gres_gpu.la
Xlib/slurm/gres_gpu.so
Xlib/slurm/gres_mic.a
Xlib/slurm/gres_mic.la
Xlib/slurm/gres_mic.so
Xlib/slurm/gres_nic.a
Xlib/slurm/gres_nic.la
Xlib/slurm/gres_nic.so
Xlib/slurm/job_submit_all_partitions.a
Xlib/slurm/job_submit_all_partitions.la
Xlib/slurm/job_submit_all_partitions.so
Xlib/slurm/job_submit_cnode.a
Xlib/slurm/job_submit_cnode.la
Xlib/slurm/job_submit_cnode.so
Xlib/slurm/job_submit_defaults.a
Xlib/slurm/job_submit_defaults.la
Xlib/slurm/job_submit_defaults.so
Xlib/slurm/job_submit_logging.a
Xlib/slurm/job_submit_logging.la
Xlib/slurm/job_submit_logging.so
Xlib/slurm/job_submit_partition.a
Xlib/slurm/job_submit_partition.la
Xlib/slurm/job_submit_partition.so
Xlib/slurm/jobacct_gather_aix.a
Xlib/slurm/jobacct_gather_aix.la
Xlib/slurm/jobacct_gather_aix.so
Xlib/slurm/jobacct_gather_cgroup.a
Xlib/slurm/jobacct_gather_cgroup.la
Xlib/slurm/jobacct_gather_cgroup.so
Xlib/slurm/jobacct_gather_linux.a
Xlib/slurm/jobacct_gather_linux.la
Xlib/slurm/jobacct_gather_linux.so
Xlib/slurm/jobacct_gather_none.a
Xlib/slurm/jobacct_gather_none.la
Xlib/slurm/jobacct_gather_none.so
Xlib/slurm/jobcomp_filetxt.a
Xlib/slurm/jobcomp_filetxt.la
Xlib/slurm/jobcomp_filetxt.so
Xlib/slurm/jobcomp_mysql.a
Xlib/slurm/jobcomp_mysql.la
Xlib/slurm/jobcomp_mysql.so
Xlib/slurm/jobcomp_none.a
Xlib/slurm/jobcomp_none.la
Xlib/slurm/jobcomp_none.so
Xlib/slurm/jobcomp_pgsql.a
Xlib/slurm/jobcomp_pgsql.la
Xlib/slurm/jobcomp_pgsql.so
Xlib/slurm/jobcomp_script.a
Xlib/slurm/jobcomp_script.la
Xlib/slurm/jobcomp_script.so
Xlib/slurm/launch_slurm.a
Xlib/slurm/launch_slurm.la
Xlib/slurm/launch_slurm.so
Xlib/slurm/mpi_lam.a
Xlib/slurm/mpi_lam.la
Xlib/slurm/mpi_lam.so
Xlib/slurm/mpi_mpich1_p4.a
Xlib/slurm/mpi_mpich1_p4.la
Xlib/slurm/mpi_mpich1_p4.so
Xlib/slurm/mpi_mpich1_shmem.a
Xlib/slurm/mpi_mpich1_shmem.la
Xlib/slurm/mpi_mpich1_shmem.so
Xlib/slurm/mpi_mpichgm.a
Xlib/slurm/mpi_mpichgm.la
Xlib/slurm/mpi_mpichgm.so
Xlib/slurm/mpi_mpichmx.a
Xlib/slurm/mpi_mpichmx.la
Xlib/slurm/mpi_mpichmx.so
Xlib/slurm/mpi_mvapich.a
Xlib/slurm/mpi_mvapich.la
Xlib/slurm/mpi_mvapich.so
Xlib/slurm/mpi_none.a
Xlib/slurm/mpi_none.la
Xlib/slurm/mpi_none.so
Xlib/slurm/mpi_openmpi.a
Xlib/slurm/mpi_openmpi.la
Xlib/slurm/mpi_openmpi.so
Xlib/slurm/mpi_pmi2.a
Xlib/slurm/mpi_pmi2.la
Xlib/slurm/mpi_pmi2.so
Xlib/slurm/preempt_none.a
Xlib/slurm/preempt_none.la
Xlib/slurm/preempt_none.so
Xlib/slurm/preempt_partition_prio.a
Xlib/slurm/preempt_partition_prio.la
Xlib/slurm/preempt_partition_prio.so
Xlib/slurm/preempt_qos.a
Xlib/slurm/preempt_qos.la
Xlib/slurm/preempt_qos.so
Xlib/slurm/priority_basic.a
Xlib/slurm/priority_basic.la
Xlib/slurm/priority_basic.so
Xlib/slurm/priority_multifactor.a
Xlib/slurm/priority_multifactor.la
Xlib/slurm/priority_multifactor.so
Xlib/slurm/priority_multifactor2.a
Xlib/slurm/priority_multifactor2.la
Xlib/slurm/priority_multifactor2.so
Xlib/slurm/proctrack_cgroup.a
Xlib/slurm/proctrack_cgroup.la
Xlib/slurm/proctrack_cgroup.so
Xlib/slurm/proctrack_linuxproc.a
Xlib/slurm/proctrack_linuxproc.la
Xlib/slurm/proctrack_linuxproc.so
Xlib/slurm/proctrack_pgid.a
Xlib/slurm/proctrack_pgid.la
Xlib/slurm/proctrack_pgid.so
Xlib/slurm/sched_backfill.a
Xlib/slurm/sched_backfill.la
Xlib/slurm/sched_backfill.so
Xlib/slurm/sched_builtin.a
Xlib/slurm/sched_builtin.la
Xlib/slurm/sched_builtin.so
Xlib/slurm/sched_hold.a
Xlib/slurm/sched_hold.la
Xlib/slurm/sched_hold.so
Xlib/slurm/sched_wiki.a
Xlib/slurm/sched_wiki.la
Xlib/slurm/sched_wiki.so
Xlib/slurm/sched_wiki2.a
Xlib/slurm/sched_wiki2.la
Xlib/slurm/sched_wiki2.so
Xlib/slurm/select_cons_res.a
Xlib/slurm/select_cons_res.la
Xlib/slurm/select_cons_res.so
Xlib/slurm/select_cray.a
Xlib/slurm/select_cray.la
Xlib/slurm/select_cray.so
Xlib/slurm/select_linear.a
Xlib/slurm/select_linear.la
Xlib/slurm/select_linear.so
Xlib/slurm/select_serial.a
Xlib/slurm/select_serial.la
Xlib/slurm/select_serial.so
Xlib/slurm/src/sattach/sattach.wrapper.c
Xlib/slurm/src/srun/srun.wrapper.c
Xlib/slurm/switch_none.a
Xlib/slurm/switch_none.la
Xlib/slurm/switch_none.so
Xlib/slurm/task_cgroup.a
Xlib/slurm/task_cgroup.la
Xlib/slurm/task_cgroup.so
Xlib/slurm/task_none.a
Xlib/slurm/task_none.la
Xlib/slurm/task_none.so
Xlib/slurm/topology_3d_torus.a
Xlib/slurm/topology_3d_torus.la
Xlib/slurm/topology_3d_torus.so
Xlib/slurm/topology_node_rank.a
Xlib/slurm/topology_node_rank.la
Xlib/slurm/topology_node_rank.so
Xlib/slurm/topology_none.a
Xlib/slurm/topology_none.la
Xlib/slurm/topology_none.so
Xlib/slurm/topology_tree.a
Xlib/slurm/topology_tree.la
Xlib/slurm/topology_tree.so
Xsbin/slurmctld
Xsbin/slurmd
Xsbin/slurmdbd
Xsbin/slurmstepd
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/Slurm_Entity.pdf
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/Slurm_Individual.pdf
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/accounting.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/accounting_storageplugins.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/acct_gather_energy_plugins.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/allocation_pies.gif
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/api.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/arch.gif
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/authplugins.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/big_sys.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/bluegene.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/bull.jpg
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/cgroups.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/checkpoint_blcr.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/checkpoint_plugins.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/coding_style.pdf
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/configurator.easy.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/configurator.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/cons_res.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/cons_res_share.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/contributor.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/cpu_management.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/cray.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/crypto_plugins.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/disclaimer.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/dist_plane.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/documentation.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/download.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/elastic_computing.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/entities.gif
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/example_usage.gif
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/faq.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/gang_scheduling.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/gres.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/gres_design.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/gres_plugins.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/help.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/high_throughput.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/ibm-pe.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/ibm.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/job_exit_code.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/job_launch.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/job_submit_plugins.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/jobacct_gatherplugins.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/jobcompplugins.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/launch_plugins.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/linuxstyles.css
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/lll.gif
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/mail.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/man_index.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/maui.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/mc_support.gif
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/mc_support.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/meetings.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/moab.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/mpi_guide.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/mpiplugins.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/multi_cluster.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/news.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/overview.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/plane_ex1.gif
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/plane_ex2.gif
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/plane_ex3.gif
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/plane_ex4.gif
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/plane_ex5.gif
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/plane_ex6.gif
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/plane_ex7.gif
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/platforms.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/plugins.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/power_save.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/preempt.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/preemption_plugins.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/priority_multifactor.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/priority_multifactor2.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/priority_plugins.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/proctrack_plugins.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/programmer_guide.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/prolog_epilog.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/publications.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/qos.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/quickstart.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/quickstart_admin.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/reservations.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/resource_limits.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/rosetta.gif
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/rosetta.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/schedmd.png
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/schedplugins.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/select_design.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/selectplugins.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/slurm.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/slurm_design.pdf
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/slurm_logo.png
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/slurm_ug_agenda.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/slurm_ug_cfp.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/slurm_ug_registration.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/slurmstyles.css
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/sponsors.gif
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/sun_const.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/switchplugins.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/taskplugins.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/team.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/testimonials.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/topo_ex1.gif
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/topo_ex2.gif
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/topology.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/topology_plugin.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/troubleshoot.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/tutorial_intro_files.tar
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/tutorials.html
X%%PORTDOCS%%%%DOCSDIR%%-2.5.4/html/usage_pies.gif
X%%PORTEXAMPLES%%%%EXAMPLESDIR%%/slurm.conf
X%%PORTEXAMPLES%%@dirrm %%EXAMPLESDIR%%
X%%PORTDOCS%%@dirrm %%DOCSDIR%%-2.5.4/html
X%%PORTDOCS%%@dirrm %%DOCSDIR%%-2.5.4
X@dirrm lib/slurm/src/srun
X@dirrm lib/slurm/src/sattach
X@dirrm lib/slurm/src
X@dirrm lib/slurm
X@dirrm include/slurm
a53945c2e5c91142146a599d60f75628
exit
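
The pkg-descr packaged above describes SLURM's job-submission workflow (allocate nodes, run work, queue pending jobs). As a purely hypothetical illustration, not part of this port or its files, a minimal batch script for the sbatch utility installed by this plist might look like:

```shell
#!/bin/sh
# Hypothetical SLURM batch script -- illustrative sketch only.
# All directive values (job name, task count, time limit) are assumptions;
# adjust them for the local cluster configuration.
#SBATCH --job-name=hello        # name shown in squeue output
#SBATCH --ntasks=1              # request a single task
#SBATCH --time=00:05:00         # wall-clock limit for the job
#SBATCH --output=hello-%j.out   # %j expands to the numeric job ID

# The job simply reports which allocated node it ran on.
hostname
```

Such a script would be submitted with `sbatch hello.sh`, monitored with `squeue`, and cancelled with `scancel`, all of which this port installs under bin/ per the plist above.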



>Release-Note:
>Audit-Trail:
>Unformatted: