From owner-freebsd-cluster@FreeBSD.ORG Mon Aug 6 04:15:26 2007 Return-Path: Delivered-To: freebsd-cluster@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BE80916A418 for ; Mon, 6 Aug 2007 04:15:26 +0000 (UTC) (envelope-from raysonlogin@gmail.com) Received: from an-out-0708.google.com (an-out-0708.google.com [209.85.132.245]) by mx1.freebsd.org (Postfix) with ESMTP id 7CFDC13C459 for ; Mon, 6 Aug 2007 04:15:26 +0000 (UTC) (envelope-from raysonlogin@gmail.com) Received: by an-out-0708.google.com with SMTP id c14so314496anc for ; Sun, 05 Aug 2007 21:15:25 -0700 (PDT) DKIM-Signature: a=rsa-sha1; c=relaxed/relaxed; d=gmail.com; s=beta; h=domainkey-signature:received:received:message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=aHbMpn39hF1exe4589oaBh2y/vS5o9ZVOo7FBzO6W7vFq4H9MHc1vXFqPGrl5DbK0CBdHLsxjYoi9p/Td5IiHl24soxm/aC1HpLsLEPIRv84Id5fy3dq3dVGueIwHRs2himqz6QX/Dp8U4AMqJXes92d6VM14Roymh1D/byJO6I= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=XKTK3Itnf7GTPEw1NFCMjrNJpalfPqqr0KwpHS34eYcflxivhvN6fk2WhjohP7mhklCQ8e4WX0rz8X70rwchLTRa5aGOQDE3DPv2KGKuq7wLFtxWt/ncahHOVCK9XJX5jg/Mvaicp9ELmBcYwaVxnzWXmeTu5dHVXmvBazGIlgc= Received: by 10.100.139.12 with SMTP id m12mr2916645and.1186372242436; Sun, 05 Aug 2007 20:50:42 -0700 (PDT) Received: by 10.100.133.7 with HTTP; Sun, 5 Aug 2007 20:50:42 -0700 (PDT) Message-ID: <73a01bf20708052050j1e1a3a6dp22bdbcb0275eed1d@mail.gmail.com> Date: Sun, 5 Aug 2007 23:50:42 -0400 From: "Rayson Ho" To: "herve lubaki" In-Reply-To: <103693.94307.qm@web86107.mail.ird.yahoo.com> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Content-Disposition: inline References: 
<103693.94307.qm@web86107.mail.ird.yahoo.com> Cc: freebsd-cluster@freebsd.org Subject: Re: freebsd-cluster X-BeenThere: freebsd-cluster@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Clustering FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 06 Aug 2007 04:15:26 -0000 What kind of cluster? The requirements for an HPC cluster and those of an HA cluster are different... Rayson On 8/3/07, herve lubaki wrote: > He! > I' m herve,I study in university of Kinshasa ( D.R. of Congo); option informatic. > I' want to know how to make a basic freedsd-cluster of servers and if someone can give my softwares and doc for cluster for that. > thrank!! > e-mail : hervelubaki2001@yahoo.fr > > > --------------------------------- > Ne gardez plus qu'une seule adresse mail ! Copiez vos mails vers Yahoo! Mail > _______________________________________________ > freebsd-cluster@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-cluster > To unsubscribe, send any mail to "freebsd-cluster-unsubscribe@freebsd.org" > From owner-freebsd-cluster@FreeBSD.ORG Mon Aug 6 05:21:15 2007 Return-Path: Delivered-To: freebsd-cluster@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 2F5E716A417 for ; Mon, 6 Aug 2007 05:21:15 +0000 (UTC) (envelope-from jarrod@ipglobal.net) Received: from queso.ipglobal.net (smtp.ipglobal.net [65.183.32.23]) by mx1.freebsd.org (Postfix) with ESMTP id D21AB13C459 for ; Mon, 6 Aug 2007 05:21:14 +0000 (UTC) (envelope-from jarrod@ipglobal.net) Received: (qmail 34206 invoked by uid 89); 6 Aug 2007 04:38:48 -0000 Received: from dot.ipglobal.net (HELO reademail.com) (65.183.32.10) by queso.ipglobal.net with SMTP; 6 Aug 2007 04:38:48 -0000 Received: from 98.201.15.111 (SquirrelMail authenticated user jarrod@ipglobal.net) by reademail.com with HTTP; Sun, 5 Aug 2007 23:57:23 -0500 (CDT) Message-ID:
<4480.98.201.15.111.1186376243.squirrel@reademail.com> In-Reply-To: <73a01bf20708052050j1e1a3a6dp22bdbcb0275eed1d@mail.gmail.com> References: <103693.94307.qm@web86107.mail.ird.yahoo.com> <73a01bf20708052050j1e1a3a6dp22bdbcb0275eed1d@mail.gmail.com> Date: Sun, 5 Aug 2007 23:57:23 -0500 (CDT) From: jarrod@ipglobal.net To: freebsd-cluster@freebsd.org User-Agent: SquirrelMail/1.4.2 MIME-Version: 1.0 Content-Type: text/plain;charset=iso-8859-1 Content-Transfer-Encoding: 8bit X-Priority: 3 Importance: Normal Subject: Re: freebsd-cluster X-BeenThere: freebsd-cluster@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Clustering FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 06 Aug 2007 05:21:15 -0000 I am interested to know whether a distributed file system similar to Red Hat GFS exists for FreeBSD. I am aware of a few that exist, but they require integration at the application layer. I want a filesystem that lets client servers running standard server hardware access the same backend information. > What kind of cluster?? The requirements for an HPC cluster and that of > a HA cluster are different... > > Rayson > > > > On 8/3/07, herve lubaki wrote: >> He! >> I' m herve,I study in university of Kinshasa ( D.R. of Congo); option >> informatic. >> I' want to know how to make a basic freedsd-cluster of servers and >> if someone can give my softwares and doc for cluster for that. >> thrank!! >> e-mail : hervelubaki2001@yahoo.fr >> >> >> --------------------------------- >> Ne gardez plus qu'une seule adresse mail ! Copiez vos mails vers Yahoo!
>> Mail >> _______________________________________________ >> freebsd-cluster@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-cluster >> To unsubscribe, send any mail to >> "freebsd-cluster-unsubscribe@freebsd.org" >> > _______________________________________________ > freebsd-cluster@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-cluster > To unsubscribe, send any mail to "freebsd-cluster-unsubscribe@freebsd.org" > From owner-freebsd-cluster@FreeBSD.ORG Mon Aug 6 08:20:29 2007 Return-Path: Delivered-To: freebsd-cluster@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 90AB316A498 for ; Mon, 6 Aug 2007 08:20:29 +0000 (UTC) (envelope-from johndecot@yahoo.com) Received: from web55404.mail.re4.yahoo.com (web55404.mail.re4.yahoo.com [206.190.58.198]) by mx1.freebsd.org (Postfix) with SMTP id 2E5EE13C474 for ; Mon, 6 Aug 2007 08:20:29 +0000 (UTC) (envelope-from johndecot@yahoo.com) Received: (qmail 37503 invoked by uid 60001); 6 Aug 2007 07:53:47 -0000 DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=X-YMail-OSG:Received:Date:From:Subject:To:MIME-Version:Content-Type:Content-Transfer-Encoding:Message-ID; b=1aLm8Cp+pZYDp3zBiKmdu0CJ0Xbo9es38UaTZ8WVYfjEWXhorMwiaOGCWTgS/dg84N/vofjJITCOe3Wtx1SzJRoazmh6gDTORCFxVbG/TK+P8nec/QiP03fXD1bvJZekKsSB6vRbhPDR9AcJ31XOOSOb3PSrf/DbGLwzR4UaA/o=; X-YMail-OSG: 0EtiF2cVM1kbMjpxo.JG3A6M3y.moL65g3dU0zHrUKG7V8j7bO8UHe3YxOhP84DPai0NT3R00CXHJp0H8LuMheRobB9K9WXP44gjzMUdChT8Q_iaBOXeBlVf4mPE8.OzNlGzg.Ht8raCKeD60Jm.Nb4EO1oTePuVif21HMCn8OILxQc8cs8OKFsktcyJ.fNAgi1I5thS7FPHQweFB8g- Received: from [63.219.2.3] by web55404.mail.re4.yahoo.com via HTTP; Mon, 06 Aug 2007 00:53:46 PDT Date: Mon, 6 Aug 2007 00:53:46 -0700 (PDT) From: john decot To: freebsd-cluster@freebsd.org MIME-Version: 1.0 Message-ID: <962834.37025.qm@web55404.mail.re4.yahoo.com> Content-Type: text/plain; charset=iso-8859-1 
Content-Transfer-Encoding: 8bit X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: metric problem X-BeenThere: freebsd-cluster@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Clustering FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 06 Aug 2007 08:20:29 -0000 Hi all, As I am a new user to clustering, I am trying LAM/MPI with Ganglia. I have faced a problem while monitoring Ganglia through the web front-end, i.e. it can't locate metrics for the selected cluster. I have telnetted to 127.0.0.1 8652 and the result shows no metrics. Trying ::1... Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. ]> gmond.conf is as follows: /* This configuration is as close to 2.5.x default behavior as possible The values closely match ./gmond/metric.h definitions in 2.5.x */ globals { daemonize = yes setuid = yes user = ganglia debug_level = 0 max_udp_msg_len = 1472 mute = no deaf = no host_dmax = 0 /*secs */ cleanup_threshold = 300 /*secs */ gexec = no } /* If a cluster attribute is specified, then all gmond hosts are wrapped inside * of a tag. If you do not specify a cluster tag, then all will * NOT be wrapped inside of a tag. */ cluster { name = "my cluster" owner = "unspecified" latlong = "unspecified" url = "unspecified" } /* The host section describes attributes of the host, like the location */ host { location = "unspecified" } /* Feel free to specify as many udp_send_channels as you like. Gmond used to only support having a single channel */ udp_send_channel { #mcast_join = 239.2.11.71 port = 8649 } /* You can specify as many udp_recv_channels as you like as well. */ udp_recv_channel { # mcast_join = 239.2.11.71 port = 8649 # bind = 239.2.11.71 } /* You can specify as many tcp_accept_channels as you like to share an xml description of the state of the cluster */ tcp_accept_channel { port = 8649 } /* The old internal 2.5.x metric array has been replaced by the following collection_group directives.
What follows is the default behavior for collecting and sending metrics that is as close to 2.5.x behavior as possible. */ /* This collection group will cause a heartbeat (or beacon) to be sent every 20 seconds. In the heartbeat is the GMOND_STARTED data which expresses the age of the running gmond. */ collection_group { collect_once = yes time_threshold = 20 metric { name = "heartbeat" } } /* This collection group will send general info about this host every 1200 secs. This information doesn't change between reboots and is only collected once. */ collection_group { collect_once = yes time_threshold = 1200 metric { name = "cpu_num" } metric { name = "cpu_speed" } metric { name = "mem_total" } /* Should this be here? Swap can be added/removed between reboots. */ metric { name = "swap_total" } metric { name = "boottime" } metric { name = "machine_type" } metric { name = "os_name" } metric { name = "os_release" } metric { name = "location" } } /* This collection group will send the status of gexecd for this host every 300 secs */ /* Unlike 2.5.x the default behavior is to report gexecd OFF. */ collection_group { collect_once = yes time_threshold = 300 metric { name = "gexec" } } /* This collection group will collect the CPU status info every 20 secs. The time threshold is set to 90 seconds. In honesty, this time_threshold could be set significantly higher to reduce unneccessary network chatter. */ collection_group { collect_every = 20 time_threshold = 90 /* CPU status */ metric { name = "cpu_user" value_threshold = "1.0" } metric { name = "cpu_system" value_threshold = "1.0" } metric { name = "cpu_idle" value_threshold = "5.0" } metric { name = "cpu_nice" value_threshold = "1.0" } metric { name = "cpu_aidle" value_threshold = "5.0" } metric { name = "cpu_wio" value_threshold = "1.0" } /* The next two metrics are optional if you want more detail... ... since they are accounted for in cpu_system. 
metric { name = "cpu_intr" value_threshold = "1.0" } metric { name = "cpu_sintr" value_threshold = "1.0" } */ } collection_group { collect_every = 20 time_threshold = 90 /* Load Averages */ metric { name = "load_one" value_threshold = "1.0" } metric { name = "load_five" value_threshold = "1.0" } metric { name = "load_fifteen" value_threshold = "1.0" } } /* This group collects the number of running and total processes */ collection_group { collect_every = 80 time_threshold = 950 metric { name = "proc_run" value_threshold = "1.0" } metric { name = "proc_total" value_threshold = "1.0" } } /* This collection group grabs the volatile memory metrics every 40 secs and sends them at least every 180 secs. This time_threshold can be increased significantly to reduce unneeded network traffic. */ collection_group { collect_every = 40 time_threshold = 180 metric { name = "mem_free" value_threshold = "1024.0" } metric { name = "mem_shared" value_threshold = "1024.0" } metric { name = "mem_buffers" value_threshold = "1024.0" } metric { name = "mem_cached" value_threshold = "1024.0" } metric { name = "swap_free" value_threshold = "1024.0" } } collection_group { collect_every = 40 time_threshold = 300 metric { name = "bytes_out" value_threshold = 4096 } metric { name = "bytes_in" value_threshold = 4096 } metric { name = "pkts_in" value_threshold = 256 } metric { name = "pkts_out" value_threshold = 256 } } /* Different than 2.5.x default since the old config made no sense */ collection_group { collect_every = 1800 time_threshold = 3600 metric { name = "disk_total" value_threshold = 1.0 } } collection_group { collect_every = 40 time_threshold = 180 metric { name = "disk_free" value_threshold = 1.0 } metric { name = "part_max_used" value_threshold = 1.0 } } gmetad.conf as follows : # This is an example of a Ganglia Meta Daemon configuration file # http://ganglia.sourceforge.net/ # # $Id: gmetad.conf,v 1.17 2005/03/15 18:15:05 massie Exp $ # 
#------------------------------------------------------------------------------- # Setting the debug_level to 1 will keep daemon in the forground and # show only error messages. Setting this value higher than 1 will make # gmetad output debugging information and stay in the foreground. # default: 0 # debug_level 10 # #------------------------------------------------------------------------------- # What to monitor. The most important section of this file. # # The data_source tag specifies either a cluster or a grid to # monitor. If we detect the source is a cluster, we will maintain a complete # set of RRD databases for it, which can be used to create historical # graphs of the metrics. If the source is a grid (it comes from another gmetad), # we will only maintain summary RRDs for it. # # Format: # data_source "my cluster" [polling interval] address1:port addreses2:port ... # # The keyword 'data_source' must immediately be followed by a unique # string which identifies the source, then an optional polling interval in # seconds. The source will be polled at this interval on average. # If the polling interval is omitted, 15sec is asssumed. # # A list of machines which service the data source follows, in the # format ip:port, or name:port. If a port is not specified then 8649 # (the default gmond port) is assumed. # default: There is no default value # # data_source "my cluster" 10 localhost my.machine.edu:8649 1.2.3.5:8655 # data_source "my grid" 50 1.3.4.7:8655 grid.org:8651 grid-backup.org:8651 # data_source "another source" 1.3.4.7:8655 1.3.4.8 data_source "my cluster" 10 localhost # # Round-Robin Archives # You can specify custom Round-Robin archives here (defaults are listed below) # # RRAs "RRA:AVERAGE:0.5:1:240" "RRA:AVERAGE:0.5:24:240" "RRA:AVERAGE:0.5:168:240" "RRA:AVERAGE:0.5:672:240" \ # "RRA:AVERAGE:0.5:5760:370" # # #------------------------------------------------------------------------------- # Scalability mode. 
If on, we summarize over downstream grids, and respect # authority tags. If off, we take on 2.5.0-era behavior: we do not wrap our output # in tags, we ignore all tags we see, and always assume # we are the "authority" on data source feeds. This approach does not scale to # large groups of clusters, but is provided for backwards compatibility. # default: on # scalable off # #------------------------------------------------------------------------------- # The name of this Grid. All the data sources above will be wrapped in a GRID # tag with this name. # default: Unspecified # gridname "MyGrid" # #------------------------------------------------------------------------------- # The authority URL for this grid. Used by other gmetads to locate graphs # for our data sources. Generally points to a ganglia/ # website on this machine. # default: "http://hostname/ganglia/", # where hostname is the name of this machine, as defined by gethostname(). # authority "http://mycluster.org/newprefix/" # #------------------------------------------------------------------------------- # List of machines this gmetad will share XML with. Localhost # is always trusted. 
# default: There is no default value # trusted_hosts 127.0.0.1 169.229.50.165 my.gmetad.org # #------------------------------------------------------------------------------- # If you want any host which connects to the gmetad XML to receive # data, then set this value to "on" # default: off # all_trusted on # #------------------------------------------------------------------------------- # If you don't want gmetad to setuid then set this to off # default: on # setuid off # #------------------------------------------------------------------------------- # User gmetad will setuid to (defaults to "ganglia") # default: "ganglia" # setuid_username "ganglia" # #------------------------------------------------------------------------------- # The port gmetad will answer requests for XML # default: 8651 # xml_port 8651 # #------------------------------------------------------------------------------- # The port gmetad will answer queries for XML. This facility allows # simple subtree and summation views of the XML tree. # default: 8652 # interactive_port 8652 # #------------------------------------------------------------------------------- # The number of threads answering XML requests # default: 4 # server_threads 10 # #------------------------------------------------------------------------------- # Where gmetad stores its round-robin databases # default: "/var/db/ganglia/rrds" # rrd_rootdir "/some/other/place" So, could someone help me set up the metrics? Regards, John --------------------------------- Park yourself in front of a world of choices in alternative vehicles. Visit the Yahoo! Auto Green Center.
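[A debugging aside on the symptom described above: when gmetad's interactive port (8652) answers without metrics, the XML tree it returns simply contains no METRIC elements under the HOST entries. The sketch below shows how one might check a captured response for them offline; the XML fragment, host name, and values are hypothetical illustrations, not John's actual output.]

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment of the XML that gmond/gmetad serve on their TCP
# ports; a correctly reporting host carries one METRIC element per metric.
sample = """
<GANGLIA_XML VERSION="2.5.7" SOURCE="gmond">
 <CLUSTER NAME="my cluster" OWNER="unspecified">
  <HOST NAME="localhost" IP="127.0.0.1">
   <METRIC NAME="cpu_num" VAL="2" TYPE="uint16" UNITS="CPUs"/>
   <METRIC NAME="load_one" VAL="0.12" TYPE="float" UNITS=""/>
  </HOST>
 </CLUSTER>
</GANGLIA_XML>
"""

root = ET.fromstring(sample)
# Collect the metric names in document order; an empty list here is what
# the "can't locate metric for selected cluster" symptom looks like.
metrics = [m.get("NAME") for m in root.iter("METRIC")]
print(metrics)  # -> ['cpu_num', 'load_one']
```

[Saving the raw output of `telnet 127.0.0.1 8652` to a file and feeding it to a check like this makes it easy to see whether gmetad is receiving metrics at all, or whether only the web front-end is failing to display them.]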
From owner-freebsd-cluster@FreeBSD.ORG Mon Aug 6 11:47:53 2007 Return-Path: Delivered-To: freebsd-cluster@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 88B8916A417 for ; Mon, 6 Aug 2007 11:47:53 +0000 (UTC) (envelope-from anderson@freebsd.org) Received: from ns.trinitel.com (186.161.36.72.static.reverse.ltdomains.com [72.36.161.186]) by mx1.freebsd.org (Postfix) with ESMTP id 6145F13C46A for ; Mon, 6 Aug 2007 11:47:53 +0000 (UTC) (envelope-from anderson@freebsd.org) Received: from neutrino.vnode.org (r74-193-81-203.pfvlcmta01.grtntx.tl.dh.suddenlink.net [74.193.81.203]) (authenticated bits=0) by ns.trinitel.com (8.14.1/8.14.1) with ESMTP id l76BU31W089156 (version=TLSv1/SSLv3 cipher=DHE-DSS-AES256-SHA bits=256 verify=NO); Mon, 6 Aug 2007 06:30:03 -0500 (CDT) (envelope-from anderson@freebsd.org) Message-ID: <46B70636.9000807@freebsd.org> Date: Mon, 06 Aug 2007 06:29:58 -0500 From: Eric Anderson User-Agent: Thunderbird 2.0.0.5 (X11/20070726) MIME-Version: 1.0 To: jarrod@ipglobal.net References: <103693.94307.qm@web86107.mail.ird.yahoo.com> <73a01bf20708052050j1e1a3a6dp22bdbcb0275eed1d@mail.gmail.com> <4480.98.201.15.111.1186376243.squirrel@reademail.com> In-Reply-To: <4480.98.201.15.111.1186376243.squirrel@reademail.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Spam-Status: No, score=-1.4 required=5.0 tests=AWL,BAYES_00 autolearn=ham version=3.1.8 X-Spam-Checker-Version: SpamAssassin 3.1.8 (2007-02-13) on ns.trinitel.com Cc: freebsd-cluster@freebsd.org Subject: Re: freebsd-cluster X-BeenThere: freebsd-cluster@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Clustering FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 06 Aug 2007 11:47:53 -0000 On 08/05/07 23:57, jarrod@ipglobal.net wrote: > I am interested to know if a distributed file system exists for freebsd 
> similar to redhat GFS? I am aware of a few that exist, but require > integration at the application layer. I desire a fs that allows client servers > running standard server hardware to access the same backend > information. There are no clustered file systems for FreeBSD, and only Coda comes close, but it's probably not what you want. Maybe NFS can work for you, but more than likely not. Eric >> What kind of cluster?? The requirements for an HPC cluster and those of >> an HA cluster are different... >> >> Rayson >> >> On 8/3/07, herve lubaki wrote: >>> Hi! >>> I'm Herve; I study at the University of Kinshasa (D.R. of Congo), majoring in >>> informatics. >>> I want to know how to build a basic FreeBSD cluster of servers, and >>> whether someone can give me software and documentation for that. >>> Thanks!! >>> e-mail : hervelubaki2001@yahoo.fr > > _______________________________________________ > freebsd-cluster@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-cluster > To unsubscribe, send any mail to "freebsd-cluster-unsubscribe@freebsd.org" From owner-freebsd-cluster@FreeBSD.ORG Mon Aug 6 15:19:11 2007 Return-Path: Delivered-To: freebsd-cluster@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3AF5116A469 for ; Mon, 6 Aug 2007 15:19:11 +0000 (UTC) (envelope-from
jarrod@ipglobal.net) Received: from lily.humbled.org (humbled.org [69.31.131.138]) by mx1.freebsd.org (Postfix) with ESMTP id 0280713C468 for ; Mon, 6 Aug 2007 15:19:10 +0000 (UTC) (envelope-from jarrod@ipglobal.net) Received: (qmail 26377 invoked by uid 75); 6 Aug 2007 09:52:29 -0500 Received: from node48-2.ipglobal.net (HELO lily) (jarrod@humbled.org@65.183.48.2) by lily.humbled.org with AES128-SHA encrypted SMTP; 6 Aug 2007 09:52:29 -0500 From: "Jarrod Baumann" To: References: <103693.94307.qm@web86107.mail.ird.yahoo.com> <73a01bf20708052050j1e1a3a6dp22bdbcb0275eed1d@mail.gmail.com> <4480.98.201.15.111.1186376243.squirrel@reademail.com> <46B70636.9000807@freebsd.org> In-Reply-To: <46B70636.9000807@freebsd.org> Date: Mon, 6 Aug 2007 09:52:28 -0500 Organization: IP Global.Net Message-ID: <001801c7d839$697f14a0$3c7d3de0$@net> MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit X-Mailer: Microsoft Office Outlook 12.0 thread-index: AcfYHRpJc65mQI6AStG7MICcmgg3jwAHC6fg Content-Language: en-us Subject: RE: freebsd-cluster X-BeenThere: freebsd-cluster@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: jarrod@ipglobal.net List-Id: Clustering FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 06 Aug 2007 15:19:11 -0000 How about any drivers, experimental/beta or not, that exist for mounting other existing clustered DFS environments? I have come up empty on Google :( -----Original Message----- From: Eric Anderson [mailto:anderson@freebsd.org] Sent: Monday, August 06, 2007 6:30 AM To: jarrod@ipglobal.net Cc: freebsd-cluster@freebsd.org Subject: Re: freebsd-cluster On 08/05/07 23:57, jarrod@ipglobal.net wrote: > I am interested to know if a distributed file system exists for freebsd > similar to redhat GFS? I am aware of a few that exist, but require > integration at the application layer.
I desire a fs that allows client servers > running standard server hardware to access the same backend > information. There are no clustered file systems for FreeBSD, and only Coda comes close, but it's probably not what you want. Maybe NFS can work for you, but more than likely not. Eric >> What kind of cluster?? The requirements for an HPC cluster and those of >> an HA cluster are different... >> >> Rayson >> >> On 8/3/07, herve lubaki wrote: >>> Hi! >>> I'm Herve; I study at the University of Kinshasa (D.R. of Congo), majoring in >>> informatics. >>> I want to know how to build a basic FreeBSD cluster of servers, and >>> whether someone can give me software and documentation for that. >>> Thanks!! >>> e-mail : hervelubaki2001@yahoo.fr > > _______________________________________________ > freebsd-cluster@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-cluster > To unsubscribe, send any mail to "freebsd-cluster-unsubscribe@freebsd.org" From owner-freebsd-cluster@FreeBSD.ORG Mon Aug 6 15:55:38 2007 Return-Path: Delivered-To: freebsd-cluster@FreeBSD.ORG Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 39D7A16A418 for ; Mon, 6 Aug 2007 15:55:38 +0000 (UTC) (envelope-from olli@lurza.secnetix.de) Received: from lurza.secnetix.de (lurza.secnetix.de [IPv6:2001:1b20:1:3::1]) by
mx1.freebsd.org (Postfix) with ESMTP id 9F26713C46C for ; Mon, 6 Aug 2007 15:55:37 +0000 (UTC) (envelope-from olli@lurza.secnetix.de) Received: from lurza.secnetix.de (ytytml@localhost [127.0.0.1]) by lurza.secnetix.de (8.13.4/8.13.4) with ESMTP id l76FtUgh043816; Mon, 6 Aug 2007 17:55:35 +0200 (CEST) (envelope-from oliver.fromme@secnetix.de) Received: (from olli@localhost) by lurza.secnetix.de (8.13.4/8.13.1/Submit) id l76FtUo2043815; Mon, 6 Aug 2007 17:55:30 +0200 (CEST) (envelope-from olli) Date: Mon, 6 Aug 2007 17:55:30 +0200 (CEST) Message-Id: <200708061555.l76FtUo2043815@lurza.secnetix.de> From: Oliver Fromme To: freebsd-cluster@FreeBSD.ORG, jarrod@ipglobal.net In-Reply-To: <001801c7d839$697f14a0$3c7d3de0$@net> X-Newsgroups: list.freebsd-cluster User-Agent: tin/1.8.2-20060425 ("Shillay") (UNIX) (FreeBSD/4.11-STABLE (i386)) MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 8bit X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-2.1.2 (lurza.secnetix.de [127.0.0.1]); Mon, 06 Aug 2007 17:55:36 +0200 (CEST) Cc: Subject: Re: freebsd-cluster X-BeenThere: freebsd-cluster@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: freebsd-cluster@FreeBSD.ORG, jarrod@ipglobal.net List-Id: Clustering FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 06 Aug 2007 15:55:38 -0000 Jarrod Baumann wrote: > How about any drivers, experimental/beta or not, that exist for > mounting other existing clustered DFS environments? Well, there's FUSE in FreeBSD's ports collection, so you should be able to mount any file system for which there is a FUSE driver available. In theory. FUSE provides an API that allows file system drivers to run in userland. There's a ton of FUSE file systems on its home page ... Most of them are only written and tested on Linux, but it should be possible to compile and run them on FreeBSD with minor effort.
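[Archive note, not part of the original message: the FUSE route Oliver describes would look roughly like the sketch below on a 2007-era FreeBSD box. The port names sysutils/fusefs-kmod and sysutils/fusefs-sshfs are the FUSE ports of that period; the remote host "fileserver" and its export path are illustrative assumptions, not something from the thread.]

```shell
# Sketch: mount a remote directory via sshfs, a FUSE file system
# from the ports collection. Run as root.
cd /usr/ports/sysutils/fusefs-kmod && make install clean    # FUSE kernel module
cd /usr/ports/sysutils/fusefs-sshfs && make install clean   # sshfs FUSE driver
kldload fuse                                 # load the FUSE module
mkdir -p /mnt/remote
# "fileserver" and /export are placeholders for a real host/path:
sshfs user@fileserver:/export /mnt/remote    # userland mount over SSH
```

This gives shared access to one backend over the network, but note it inherits FUSE's userland performance cost and provides no cluster-wide locking, so it is not a substitute for a true clustered file system like GFS.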
Other than that, there is no distributed file system for FreeBSD. Coda and AFS have been mentioned, but last time I looked they were not really usable. When a customer asks me to set up clustered storage (HA), I usually recommend a NetApp Filer cluster. At least they use a BSD-derived OS, so it's not completely off-topic. ;-) Best regards Oliver -- Oliver Fromme, secnetix GmbH & Co. KG, Marktplatz 29, 85567 Grafing b. M. Handelsregister: Registergericht Muenchen, HRA 74606, Geschäftsfuehrung: secnetix Verwaltungsgesellsch. mbH, Handelsregister: Registergericht München, HRB 125758, Geschäftsführer: Maik Bachmann, Olaf Erb, Ralf Gebhart FreeBSD-Dienstleistungen, -Produkte und mehr: http://www.secnetix.de/bsd C++: "an octopus made by nailing extra legs onto a dog" -- Steve Taylor, 1998 From owner-freebsd-cluster@FreeBSD.ORG Tue Aug 7 19:46:37 2007 Return-Path: Delivered-To: freebsd-cluster@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BE09F16A417 for ; Tue, 7 Aug 2007 19:46:37 +0000 (UTC) (envelope-from anderson@freebsd.org) Received: from ns.trinitel.com (186.161.36.72.static.reverse.ltdomains.com [72.36.161.186]) by mx1.freebsd.org (Postfix) with ESMTP id 9A7CC13C4B3 for ; Tue, 7 Aug 2007 19:46:37 +0000 (UTC) (envelope-from anderson@freebsd.org) Received: from 165.61.0.10.in-addr.arpa (72-254-39-241.client.stsn.net [72.254.39.241]) (authenticated bits=0) by ns.trinitel.com (8.14.1/8.14.1) with ESMTP id l77JkaXk034573; Tue, 7 Aug 2007 14:46:36 -0500 (CDT) (envelope-from anderson@freebsd.org) Message-ID: <46B8CC16.9070309@freebsd.org> Date: Tue, 07 Aug 2007 14:46:30 -0500 From: Eric Anderson User-Agent: Thunderbird 2.0.0.6 (Macintosh/20070728) MIME-Version: 1.0 To: freebsd-cluster@freebsd.org, jarrod@ipglobal.net References: <200708061555.l76FtUo2043815@lurza.secnetix.de> In-Reply-To: <200708061555.l76FtUo2043815@lurza.secnetix.de> Content-Type: text/plain; charset=ISO-8859-1;
format=flowed Content-Transfer-Encoding: 7bit X-Spam-Status: No, score=1.1 required=5.0 tests=BAYES_00, HELO_DYNAMIC_SPLIT_IP, RCVD_NUMERIC_HELO autolearn=no version=3.1.8 X-Spam-Level: * X-Spam-Checker-Version: SpamAssassin 3.1.8 (2007-02-13) on ns.trinitel.com Cc: Subject: Re: freebsd-cluster X-BeenThere: freebsd-cluster@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Clustering FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 07 Aug 2007 19:46:37 -0000 Oliver Fromme wrote: > Jarrod Baumann wrote: > > How about any drivers, experimental/beta or not, that exist for > > mounting other existing clustered DFS environments? > > Well, there's FUSE in FreeBSD's ports collection, so you > should be able to mount any file system for which there is > a FUSE driver available. In theory. > > FUSE provides an API that allows file system drivers to > run in userland. There's a ton of FUSE file systems on > its home page ... Most of them are only written and tested > on Linux, but it should be possible to compile and run them > on FreeBSD with minor effort. > > Other than that, there is no distributed file system for > FreeBSD. Coda and AFS have been mentioned, but last time > I looked they were not really usable. > > When a customer asks me to set up clustered storage (HA), > I usually recommend a NetApp Filer cluster. At least they > use a BSD-derived OS, so it's not completely off-topic. ;-) > > Best regards > Oliver > Oh, and I forgot to mention this piece of software: http://www.bsdcluster.com/ I haven't played with it recently, but it was promising when I did play with it about a year ago. It's not a clustered file system implementation, but it does have some failover/HA pieces in it, and some VFS hooks to distribute changes. It's worth a look for sure. Eric