<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns="http://purl.org/rss/1.0/">




    



<channel rdf:about="https://argo-doc.ictp.it/search_rss">
  <title>ARGO HPC Documentation</title>
  <link>https://argo-doc.ictp.it</link>

  <description>
    
            These are the search results for the query, showing results 1 to 15.
        
  </description>

  

  

  <image rdf:resource="https://argo-doc.ictp.it/logo.png"/>

  <items>
    <rdf:Seq>
      
        <rdf:li rdf:resource="https://argo-doc.ictp.it/list-of-available-queues"/>
      
      
        <rdf:li rdf:resource="https://argo-doc.ictp.it/software-overview"/>
      
      
        <rdf:li rdf:resource="https://argo-doc.ictp.it/storage-overview"/>
      
      
        <rdf:li rdf:resource="https://argo-doc.ictp.it/how-to-use-the-queue-manager"/>
      
      
        <rdf:li rdf:resource="https://argo-doc.ictp.it/using-201cmodule201d-command"/>
      
      
        <rdf:li rdf:resource="https://argo-doc.ictp.it/sharebox/viewgit"/>
      
      
        <rdf:li rdf:resource="https://argo-doc.ictp.it/sharebox/test"/>
      
      
        <rdf:li rdf:resource="https://argo-doc.ictp.it/sharebox/admin"/>
      
      
        <rdf:li rdf:resource="https://argo-doc.ictp.it/sharebox/news"/>
      
      
        <rdf:li rdf:resource="https://argo-doc.ictp.it/sharebox/news/aggregator"/>
      
      
        <rdf:li rdf:resource="https://argo-doc.ictp.it/sharebox/events"/>
      
      
        <rdf:li rdf:resource="https://argo-doc.ictp.it/sharebox/events/aggregator"/>
      
      
        <rdf:li rdf:resource="https://argo-doc.ictp.it/sharebox/Members"/>
      
      
        <rdf:li rdf:resource="https://argo-doc.ictp.it/sharebox/sparklesharebiarrow.png"/>
      
      
        <rdf:li rdf:resource="https://argo-doc.ictp.it/sharebox/front-page"/>
      
    </rdf:Seq>
  </items>

</channel>


  <item rdf:about="https://argo-doc.ictp.it/list-of-available-queues">
    <title>Argo Overview, Table of available queues/partitions</title>
    <link>https://argo-doc.ictp.it/list-of-available-queues</link>
    <description>Argo overview of available hardware and Table of available queues/partitions are provided.</description>
    <content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[<h2>Argo overview</h2>
<p>Argo is the ICTP HPC cluster, comprising 112 hosts/nodes with a total of 2100 CPUs, nearly 17 TB of memory, 100 Gbps Omni-Path or InfiniBand interconnects, a 1 Gbps Ethernet network, and several hundred TB of dedicated NFS storage.</p>
<p>The available worker/compute nodes are organised into queues (partitions).</p>
<p>There are three more special cluster nodes: a <strong>master</strong> node that controls job execution, and the <strong>login nodes</strong> argo-login1 and argo-login2, where users log in, submit jobs, and compile code.</p>
<p> </p>
<p>Jobs can be submitted from argo (which points to argo-login2), or directly from argo-login1 and argo-login2.</p>
<pre>ssh argo.ictp.it</pre>
<p>or</p>
<pre>ssh argo-login1.ictp.it</pre>
<p> </p>
<h2>List of available queues/partitions</h2>
<p>Queue information can be listed with the sinfo command:</p>
<pre id="content">$  sinfo -s
PARTITION AVAIL  TIMELIMIT   NODES(A/I/O/T)  NODELIST
cmsp         up 1-00:00:00        30/6/4/40  node[01-16,161-184]
esp          up 1-00:00:00       16/12/8/36  node[21-56]
long*        up 1-00:00:00       12/13/7/32  node[61-92]
gpu          up 1-00:00:00          0/1/1/2  gpu[01-02]
serial       up 7-00:00:00          0/2/0/2  serial[01-02]
testing      up    6:00:00      42/19/11/72  node[01-16,61-92,161-184]
esp_guest    up 1-00:00:00          2/0/0/2  node[28-29]</pre>
<p> </p>
<p>The principal queue for all users is the <strong>long</strong> queue, with 32 nodes and a 24-hour time limit. It is the default queue if none is specified in the job.</p>
<p>Dedicated queues <strong>cmsp</strong>, <strong>esp</strong>, <strong>esp_guest</strong> and <strong>gpu</strong> are available to specific Argo users, upon authorization.</p>
<p>The <strong>testing</strong> queue is special: it comprises several nodes from both the long and cmsp queues, but has a short time limit of 6 hours.</p>
<p>The <strong>serial</strong> queue is reserved for serial jobs and has a very long time limit of 7 days. Two nodes are in the serial queue.</p>
<p>In general, nodes are NOT shared among jobs in any queue. The exceptions are the serial and gpu queues, where nodes are shared.</p>
<p> </p>
<h2>Node features</h2>
<p>Overall, Argo is a heterogeneous cluster, with nodes belonging to various generations of Intel CPU microarchitectures. Most numerous are nodes of the <i>broadwell</i> and <i>skylake-cascade</i> architectures, followed by <i>Cascade-Lake</i>. Memory size also varies.</p>
<p>For each node, its microarchitecture, memory size, and other features are listed in the sinfo output:</p>
<pre>$ sinfo -N -o "%.20N %.15C  %.15P   %.40b"
            NODELIST   CPUS(A/I/O/T)        PARTITION                            ACTIVE_FEATURES
               gpu01       0/40/0/40              gpu               128gb,broadwell-ep,e5-2640v4
               gpu02       0/0/16/16              gpu                32gb,sandybridge-ep,e5-2665
<br />              node01       40/0/0/40             cmsp      omnipart,128gb,broadwell-ep,e5-2640v4
              node02       40/0/0/40             cmsp      omnipart,128gb,broadwell-ep,e5-2640v4
              ...
              node15       40/0/0/40             cmsp      omnipart,128gb,broadwell-ep,e5-2640v4
              node16       40/0/0/40             cmsp      omnipart,128gb,broadwell-ep,e5-2640v4
              <br /><br /><span>	</span>      node21       40/0/0/40              esp      omnipart,128gb,skylake-cascade,silver
              node22       40/0/0/40              esp      omnipart,128gb,skylake-cascade,silver<br />              ...
              node56       0/40/0/40              esp      omnipart,128gb,skylake-cascade,silver
<br />              node61       64/0/0/64            long*   infiniband,187gb,Cascade-Lake,silver-421
              node62       0/0/64/64            long*   infiniband,187gb,Cascade-Lake,silver-421
              ...<br />              node91       64/0/0/64            long*   infiniband,187gb,Cascade-Lake,silver-421
              node92       64/0/0/64            long*   infiniband,187gb,Cascade-Lake,silver-421
             <br />             node161       0/0/40/40             cmsp        omnipart,192,broadwell-ep,e5-2640v4
             node162       0/40/0/40             cmsp        omnipart,192,broadwell-ep,e5-2640v4
             ...<br />             node183       40/0/0/40             cmsp        omnipart,192,broadwell-ep,e5-2640v4
             node184       0/40/0/40             cmsp        omnipart,192,broadwell-ep,e5-2640v4
            <br />            serial01       0/16/0/16           serial                32gb,sandybridge-ep,e5-2650
            serial02       0/16/0/16           serial                32gb,sandybridge-ep,e5-2650</pre>
<div></div>
<p>Within each queue, nodes are homogeneous in terms of all of their features.</p>
<p>All nodes are networked together with 1 Gbps ethernet links spanning multiple switches.</p>
<p>Access to storage is also done through the Gigabit ethernet network.</p>
<p>Nodes within each queue are also networked together in a low-latency fabric for MPI communication, using InfiniBand or Omni-Path technology at 100 Gbps.</p>
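<p>Node features such as those listed above can be used to restrict a job to particular hardware via Slurm's <tt>--constraint</tt> option. A minimal sketch, assuming the feature names shown in the sinfo listing above (e.g. <tt>broadwell-ep</tt>, <tt>128gb</tt>) are still current:</p>
<pre># Request only Broadwell nodes with 128 GB of RAM in the cmsp queue
#SBATCH -p cmsp
#SBATCH --constraint=broadwell-ep</pre>
<p>Multiple features can be combined in a single constraint expression; see the sbatch documentation for the exact syntax supported by the installed Slurm version.</p>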
<h2>Table of available queues and nodes</h2>
<p>The table below summarises the queues, nodes and their characteristics.</p>
<p> </p>
<table class="listing">
<thead></thead> 
<tbody>
<tr>
<th>Queue/Partition</th><th>
<p class="TableContents">Access Policy</p>
</th><th>Notes</th>
</tr>
<tr>
<td><strong>long</strong></td>
<td>All users</td>
<td>
<p class="TableContents">- Allows allocations of a maximum of 10 nodes for running parallel jobs.</p>
</td>
</tr>
<tr>
<td><strong>testing</strong></td>
<td>All users</td>
<td></td>
</tr>
<tr>
<td><strong>serial</strong></td>
<td>All users</td>
<td>
<p>- ONLY for single-core (single-CPU or single-task) jobs; parallel or MPI jobs will NOT work.</p>
<p>- Up to a maximum of 7 independent serial jobs may run simultaneously.</p>
<p>- Resources are over-subscribed (the nodes are shared among jobs and users).</p>
</td>
</tr>
<tr>
<td colspan="3">
<p class="TableContents" style="text-align: center; "><strong>OTHER Dedicated Queues</strong></p>
</td>
</tr>
<tr>
<td><strong>cmsp</strong></td>
<td>Special authorization needed</td>
<td></td>
</tr>
<tr>
<td><strong>esp</strong><br /></td>
<td>Special authorization needed</td>
<td></td>
</tr>
<tr>
<td>
<p><strong>gpu</strong></p>
<p> </p>
</td>
<td>Special authorization needed</td>
<td>
<div>- Several GPU accelerators are available: Nvidia Tesla K40 and Nvidia Tesla P100.</div>
<div>- Resources are over-subscribed (the nodes are shared among jobs).</div>
</td>
</tr>
</tbody>
</table>
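<p>As an illustration of the serial queue's single-core policy, a job script like the following could be used (a minimal sketch; the executable name is a placeholder):</p>
<pre>#!/bin/bash
#SBATCH -p serial          # serial partition: single-core jobs only
#SBATCH -n 1               # exactly one task
#SBATCH -t 7-00:00:00      # up to the 7-day limit of this queue
./my_serial_program</pre>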
<h2>Table with technical details</h2>
<p> </p>
<table class="vertical listing">
<tbody>
<tr>
<th>
<p>Queue/</p>
<p>Partition</p>
</th><th>
<p>Max walltime</p>
(h)</th><th>Node range</th><th>Micro-architecture</th><th>Cores</th><th>
<p>Ram per core</p>
<p>(GB/c)</p>
</th><th>
<p>Total</p>
<p>nodes</p>
</th><th>
<p>Total</p>
<p>cores</p>
</th><th>
<p>Ram per node</p>
<p>(GB)</p>
</th>
</tr>
<tr>
<td>long</td>
<td style="text-align: right; ">24:00</td>
<td>
<p>node[61...92]</p>
</td>
<td>
<p>Cascade-Lake</p>
</td>
<td>
<p class="TableContents">16</p>
</td>
<td>
<p style="text-align: right; ">11.69</p>
</td>
<td>
<p class="TableContents" style="text-align: right; ">32</p>
</td>
<td style="text-align: right; ">512</td>
<td>
<p style="text-align: right; ">187</p>
</td>
</tr>
<tr>
<td><i>testing</i></td>
<td style="text-align: right; "><i>6:00</i></td>
<td><i>node[01...16]<br />node[161...184]<br />node[61...92]</i></td>
<td><i>see cmsp queue<br />see cmsp queue<br />see long queue</i></td>
<td>
<p class="TableContents"><i>-<br />-<br />-</i></p>
</td>
<td style="text-align: right; ">-<br />-<br />-</td>
<td style="text-align: right; ">-<br />-<br />-<br /><br /></td>
<td style="text-align: right; ">-<br />-<br />-</td>
<td style="text-align: right; ">-<br />-<br />-</td>
</tr>
<tr>
<td>cmsp</td>
<td style="text-align: right; ">24:00</td>
<td>
<p>node[01...16]<br /><span>node[161...184]</span></p>
</td>
<td>Broadwell<br />Broadwell</td>
<td>
<p class="TableContents">20<br />20</p>
</td>
<td style="text-align: right; ">
<p>6.4<br /><span>9.6</span></p>
</td>
<td style="text-align: center; ">
<p style="text-align: right; ">16<br /><span>24</span></p>
</td>
<td style="text-align: right; ">320<br />480</td>
<td style="text-align: right; ">
<p>128<br /><span>192</span></p>
</td>
</tr>
<tr>
<td>serial</td>
<td style="text-align: right; ">168:00</td>
<td>serial[01...02]</td>
<td>Sandybridge</td>
<td>16</td>
<td style="text-align: right; ">2</td>
<td style="text-align: right; ">2</td>
<td style="text-align: right; ">32</td>
<td style="text-align: right; ">32</td>
</tr>
<tr>
<td>esp</td>
<td style="text-align: right; ">24:00</td>
<td>node[21...56]</td>
<td>skylake-cascade</td>
<td>
<p class="TableContents">20</p>
</td>
<td style="text-align: right; ">6.4</td>
<td style="text-align: right; ">36</td>
<td style="text-align: right; ">720</td>
<td style="text-align: right; ">128</td>
</tr>
<tr>
<td>gpu</td>
<td style="text-align: right; ">24:00</td>
<td>
<p>gpu01<br /><span>gpu02</span></p>
</td>
<td>
<p>Broadwell+ 2*gp100<br /><span>Sandybridge+ 2*k40c</span></p>
</td>
<td>
<p>20<br /><span>16</span></p>
</td>
<td>
<p style="text-align: right; ">6.4<br /><span>2</span></p>
</td>
<td>
<p style="text-align: right; ">1<br /><span>1 </span></p>
</td>
<td>
<p style="text-align: right; ">16<br /><span>32</span></p>
</td>
<td>
<p style="text-align: right; ">128<br /><span>32</span></p>
</td>
</tr>
</tbody>
</table>]]></content:encoded>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>mverina</dc:creator>
    <dc:rights></dc:rights>
    <dc:date>2018-05-09T14:05:00Z</dc:date>
    <dc:type>Page</dc:type>
  </item>


  <item rdf:about="https://argo-doc.ictp.it/software-overview">
    <title>Software Overview</title>
    <link>https://argo-doc.ictp.it/software-overview</link>
    <description>Software Overview</description>
    <content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[<p class="p1"> </p>
<p class="p2"><strong>A rich software stack of MPI implementations, compilers, libraries and applications is installed on Argo. Several versions of a given software component are present where necessary. To choose the desired version, use the <a class="internal-link" href="resolveuid/08a860d693fa4239b4a7779cd483f464">module</a> command.</strong></p>
<p>The list of available software components follows:</p>
<pre>% module avail </pre>
<pre><div id="_mcePaste">------------------------------------------------------- /opt-ictp/ohpc/modulefiles/mpi --------------------------------------------------------</div>
<div id="_mcePaste">gnu-openmpi/boost/1.63.0            gnu-openmpi/netcdf/4.4.1.1     intel-mpi/2016                   openmpi/1.6.5/intel/2013</div>
<div id="_mcePaste">gnu-openmpi/fftw/3.3.4              gnu-openmpi/phdf5/1.8.17       openmpi/1.10.2/intel/2013        openmpi/gnu</div>
<div id="_mcePaste">gnu-openmpi/netcdf-cxx/4.3.0        gnu-openmpi/scalapack/2.0.2    openmpi/1.10.2/intel/2016        openmpi/intel</div>
<div id="_mcePaste">gnu-openmpi/netcdf-fortran/4.4.4    gnu-openmpi/scipy/0.19.0       openmpi/1.10.2/intel/2017 (D)</div>
<div id="_mcePaste"></div>
<div id="_mcePaste">------------------------------------------------------- /opt-ictp/ohpc/modulefiles/libs -------------------------------------------------------</div>
<div id="_mcePaste">boost/1.55         gmp/4.3.2/default-compiler        hdf5/intel          (D)    mpc/0.9/default-compiler           netcdf/4.3.1/intel/2013</div>
<div id="_mcePaste">boost/1.60  (D)    gmp/default-compiler              hdf5/1.8.12/intel          mpc/default-compiler               netcdf/4.4.1/intel/2017</div>
<div id="_mcePaste">cln/1.3.2          grib_api/intel             (D)    hdf5/1.8.19/intel          mpfr/3.0.0/default-compiler        plasma/2.6.0</div>
<div id="_mcePaste">esmf/7.0.0         grib_api/1.12.0/intel             iml/1.0.3/gnu/4.4.0        mpfr/default-compiler</div>
<div id="_mcePaste">gabriel/1.2        grib_api/1.23.1/intel             intel-mkl/2017             netcdf/intel                (D)</div>
<div id="_mcePaste"></div>
<div id="_mcePaste">---------------------------------------------------- /opt-ictp/ohpc/modulefiles/compilers -----------------------------------------------------</div>
<div id="_mcePaste">cuda/5.5        default-compiler    gcc/4.5.2        gnat/2014     intel/2013 (D)    nocomp          opencl/amd/2.8      pgi/16.10 (D)</div>
<div id="_mcePaste">cuda/6.5        g++_addon/4.9.0     gcc/4.6.2        gnu/5.4.0     intel/2016        opencl-amd      opencl/intel/1.2    pgi/18.4</div>
<div id="_mcePaste">cuda/9.1 (D)    gcc/4.4.7           gcc/4.9.0 (D)    gnu7/7.2.0    intel/2017        opencl-intel    pgi/10.9</div>
<div id="_mcePaste"></div>
<div id="_mcePaste">------------------------------------------------------- /opt-ictp/ohpc/modulefiles/apps -------------------------------------------------------</div>
<div id="_mcePaste">PHCpack/2.3.95         espresso/5.2.1                         gnu/numpy/1.11.1                     ncl/6.3.0/gnu/4.4.7</div>
<div id="_mcePaste">R/2.15          (D)    espresso/6.2.1                         gnu/openblas/0.2.19                  nco/intel                     (D)</div>
<div id="_mcePaste">R/3.1.2                ferret/v6.842                          gnu/openmpi/1.10.6                   nco/4.4.2/intel</div>
<div id="_mcePaste">R/3.3.1                gdal/1.7.3                             grads/binary                         nco/4.6.8/intel</div>
<div id="_mcePaste">R/3.5.1                gdal/2.2.4                             gromacs/5.0-rc1                      nwchem/6.5</div>
<div id="_mcePaste">alps/2.1.1             gdal/2.3.0                      (D)    gromacs/2016.3                (D)    proj/4.7.0</div>
<div id="_mcePaste">anaconda3/5.1.0        gdl/0.9.4/intel                        lhapdf/5.8.6/default-compiler        python/esp</div>
<div id="_mcePaste">cdo/1.9.0              gengetopt/2.22/default-compiler        maple/14                             python/2.7.13                 (D)</div>
<div id="_mcePaste">cp2k/2.4        (D)    glpk/4.45/gnu/4.4.0                    matlab/r2011                         root/5.30.04/default-compiler</div>
<div id="_mcePaste">cp2k/2.6.1             gnu/gsl/2.2.1                          ncl/5.2.1/gnu/4.4.7                  root/6.08.06</div>
<div id="_mcePaste">espresso/5.1    (D)    gnu/hdf5/1.8.17                        ncl/6.0.0/gnu/4.4.7</div>
<div id="_mcePaste"></div>
<div id="_mcePaste">---------------------------------------------------------- /opt/ohpc/pub/modulefiles ----------------------------------------------------------</div>
<div id="_mcePaste">autotools        (L)    cmake/3.9.2    gnu7/7.2.0     llvm5/5.0.0        papi/5.5.1    prun/1.1</div>
<div id="_mcePaste">clustershell/1.8        gnu/5.4.0      llvm4/4.0.1    ohpc        (L)    pmix/1.2.3    prun/1.2 (L,D)</div>
<div></div>
</pre>]]></content:encoded>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>mverina</dc:creator>
    <dc:rights></dc:rights>
    <dc:date>2018-05-09T14:05:00Z</dc:date>
    <dc:type>Page</dc:type>
  </item>


  <item rdf:about="https://argo-doc.ictp.it/storage-overview">
    <title>Storage Overview</title>
    <link>https://argo-doc.ictp.it/storage-overview</link>
    <description>Storage Overview</description>
    <content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[<p> </p>
<h3><strong>Storage:</strong></h3>
<table class="listing grid">
<tbody>
<tr>
<th>
<p>Mount point</p>
</th> <th>
<p>Function</p>
</th> <th>
<p>Implemented on</p>
</th> <th>
<p>Shared amongst nodes</p>
</th> <th>
<p>Note</p>
</th>
</tr>
<tr>
<td>
<p><strong>/home/&lt;username&gt;</strong></p>
</td>
<td>
<p>User’s Home directory on Argo</p>
</td>
<td>
<p>NFS on high-performance   storage</p>
</td>
<td>
<p>yes</p>
</td>
<td>
<p>Quotas apply, based on groups and applications.</p>
</td>
</tr>
<tr>
<td>
<p><strong>/local_scratch</strong></p>
</td>
<td>
<p>scratch</p>
</td>
<td>
<p>Internal Hard Disk on the Worker node</p>
</td>
<td>
<p>no</p>
</td>
<td>
<p>Size is ~200GB per   node.</p>
<p>Space is not guaranteed.</p>
<p>Data is automatically cleaned 48 hours after a job   finishes.</p>
</td>
</tr>
</tbody>
</table>
<p> </p>
<h3><strong>Dedicated storage for ESP group:</strong></h3>
<p><strong>Only for members of the ESP group!<br /></strong></p>
<table class="listing grid">
<tbody>
<tr>
<th>
<p>Mount point</p>
</th> <th>
<p>Function</p>
</th> <th>
<p>Implemented on</p>
</th> <th>
<p>Shared amongst nodes</p>
</th> <th>
<p>Note</p>
</th>
</tr>
<tr>
<td>
<p><strong>/home/netapp-clima</strong></p>
</td>
<td>
<p> </p>
</td>
<td>
<p>NFS on high-performance   storage</p>
</td>
<td>
<p>yes</p>
</td>
<td>
<p>Quotas apply</p>
</td>
</tr>
<tr>
<td><strong>/home/netapp-clima-users1</strong></td>
<td></td>
<td>NFS on high-performance   storage</td>
<td>yes</td>
<td>Quotas apply</td>
</tr>
<tr>
<td>
<p><strong>/home/netapp-clima-scratch</strong></p>
</td>
<td>
<p>scratch</p>
</td>
<td>
<p>NFS on high-performance   storage</p>
</td>
<td>
<p>yes</p>
</td>
<td>
<p>Data is automatically cleaned</p>
</td>
</tr>
<tr>
<td>
<p><strong>/home/netapp-clima-shared</strong></p>
</td>
<td>
<p> </p>
</td>
<td>
<p>NFS on high-performance   storage</p>
</td>
<td>
<p>yes</p>
</td>
<td>
<p> </p>
</td>
</tr>
<tr>
<td>
<p><strong>/home/netapp-clima-cordex</strong></p>
</td>
<td>
<p> </p>
</td>
<td>
<p>NFS on high-performance   storage</p>
</td>
<td>
<p>yes</p>
</td>
<td>
<p> </p>
</td>
</tr>
<tr>
<td>
<p><strong>/home/netapp-clima-dods</strong></p>
</td>
<td>
<p> </p>
</td>
<td>
<p>NFS on high-performance   storage</p>
</td>
<td>
<p>yes</p>
</td>
<td>
<p> </p>
</td>
</tr>
<tr>
<td>
<p><strong>/home/clima-archive</strong></p>
</td>
<td>
<p> </p>
</td>
<td>
<p>NFS on archive storage</p>
</td>
<td>
<p>yes</p>
</td>
<td>
<p> </p>
</td>
</tr>
<tr>
<td>
<p><strong>/home/clima-archive2</strong></p>
</td>
<td>
<p> </p>
</td>
<td>
<p>NFS on archive storage</p>
</td>
<td>
<p>yes</p>
</td>
<td>
<p> </p>
</td>
</tr>
<tr>
<td>
<p><strong>/home/clima-archive3</strong></p>
</td>
<td>
<p> </p>
</td>
<td>
<p>NFS on archive storage</p>
</td>
<td>
<p>yes</p>
</td>
<td>
<p> </p>
</td>
</tr>
<tr>
<td>
<p><strong>/home/tompkins-archive</strong></p>
</td>
<td>
<p> </p>
</td>
<td>
<p>NFS on archive storage</p>
</td>
<td>
<p>yes</p>
</td>
<td>
<p> </p>
</td>
</tr>
</tbody>
</table>
<p> </p>
<ul>
<li>To check quota usage, run the ‘quota’ command on archive storage, and ‘naquota’ on high-performance storage.</li>
</ul>]]></content:encoded>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>mverina</dc:creator>
    <dc:rights></dc:rights>
    <dc:date>2018-05-09T14:05:15Z</dc:date>
    <dc:type>Page</dc:type>
  </item>


  <item rdf:about="https://argo-doc.ictp.it/how-to-use-the-queue-manager">
    <title>How to use the queue manager</title>
    <link>https://argo-doc.ictp.it/how-to-use-the-queue-manager</link>
    <description>Basic information on how to submit jobs to the cluster</description>
    <content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[<div class="document" style="padding-left: 0px; ">
<p style="padding-left: 0px; "> </p>
<p style="padding-left: 0px; ">The ARGO cluster uses <a class="external-link" href="https://slurm.schedmd.com/"><strong>Slurm</strong> (https://slurm.schedmd.com/)</a> for workload management, including queue management.</p>
<p class="callout" style="padding-left: 0px; ">Note: until May 2018, Argo was Torque-based. To assist users with their old job scripts, we still support a limited set of Torque commands via Torque/PBS wrappers, easing the transition from Torque/PBS to Slurm.</p>
<p class="callout" style="padding-left: 0px; ">If you wish to quickly find new Slurm counterparts for Torque commands you know, please see any of the  <a class="external-link" href="https://slurm.schedmd.com/rosetta.html">Rosetta Stone of Workload Managers</a>,  <a class="external-link" href="https://hpc-uit.readthedocs.io/en/latest/jobs/torque_slurm_table.html">Translate PBS/Torque to SLURM</a> or <a class="external-link" href="https://www.glue.umd.edu/hpcc/help/slurm-vs-moab.html">Slurm vs Moab/Torque</a>.</p>
<p style="padding-left: 0px; ">The rest of this documentation concentrates on the Slurm environment.</p>
<ul>
<li>To <strong>submit</strong> a job just type:</li>
</ul>
<pre class="literal-block" style="padding-left: 1.5em; ">$ sbatch jobscript.sh
</pre>
<p style="padding-left: 0px; ">A simple jobscript could be:</p>
<pre class="literal-block" style="padding-left: 1.5em; ">$ cat jobscript.sh
#!/bin/bash 
#SBATCH -p testing # partition (queue) 
#SBATCH -N 1 # number of nodes 
~/fortran_code/test.x</pre>
<p style="padding-left: 0px; ">Each line beginning with <strong><tt class="literal docutils" style="padding-left: 0px; ">#SBATCH</tt> </strong>will be interpreted by the queue manager as options to the sbatch command.</p>
<ul>
<li>A more complex example (containing some more useful options) follows:</li>
</ul>
<pre class="literal-block" style="padding-left: 1.5em; ">$ cat jobscript.sh
#!/bin/bash 
# 
#SBATCH --job-name=my-test  # job name
#SBATCH -p testing # partition (queue) 
#SBATCH -N 1 # number of nodes 
#SBATCH -n 2 # number of cores 
#SBATCH -t 0-2:00 # time (D-HH:MM) 
#SBATCH -o slurm.%j.out # STDOUT 
#SBATCH -e slurm.%j.err # STDERR 
#SBATCH --mail-type=ALL # I want e-mail alerting

#############
### This job's working directory
echo "Working directory is $SLURM_SUBMIT_DIR"
cd $SLURM_SUBMIT_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
# Run your executable
/home/username/fortran_codes/test_f90.x</pre>
<p style="padding-left: 0px; ">To check the queue for all jobs:</p>
<pre>$ squeue</pre>
<p>To check the queue for all jobs belonging to one user:</p>
<pre>$ squeue -u &lt;user-id&gt;</pre>
<p>To check the queue for a given job id:</p>
<pre>$ squeue -j &lt;job-id&gt;</pre>
<div>
<div>To check the history for a given job id that is no longer in the queue:</div>
<div></div>
</div>
<pre>$ sacct -j &lt;job-id&gt;</pre>
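<p>sacct can also report selected accounting fields for a finished job, for example its state, elapsed time and peak memory usage:</p>
<pre>$ sacct -j &lt;job-id&gt; --format=JobID,JobName,State,Elapsed,MaxRSS</pre>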
</div>
<div class="document" style="padding-left: 0px; ">To cancel a job:</div>
<div class="document" style="padding-left: 0px; ">
<pre class="document">$ scancel &lt;job-id&gt;</pre>
</div>
<div class="document" style="padding-left: 0px; "></div>
<div class="document" style="padding-left: 0px; ">To ask for an interactive session:</div>
<pre class="document" style="padding-left: 0px; ">$ srun -p long -N 1 --pty bash</pre>
<div class="document" style="padding-left: 0px; "></div>
<div class="document" style="padding-left: 0px; ">
<ul>
<li>The login nodes of Argo provide a command called <strong>showfree</strong> that helps you identify which resources are available (idle) for immediate use before you submit a job. It is based on the Slurm sinfo command, so you can use either:</li>
</ul>
<p style="padding-left: 0px; "> </p>
<pre style="padding-left: 0px; ">  $ showfree</pre>
or
<pre>$ sinfo -o '%10P %.5a %15l %6D %8t %15C %N' | egrep -v 'drain|down|alloc'</pre>
<pre><div id="_mcePaste">PARTITION  AVAIL TIMELIMIT       NODES  STATE    CPUS(A/I/O/T)   NODELIST</div>
<div id="_mcePaste"></div><div id="_mcePaste"></div><div id="_mcePaste">esp           up 1-00:00:00      11     idle     0/132/0/132     node[74-77,90-96]</div>
<div id="_mcePaste">esp1          up 1-00:00:00      1      idle     0/20/0/20       node102</div>
<div id="_mcePaste">gpu           up 1-00:00:00      2      idle     0/36/0/36       gpu[01-02]</div>
<div id="_mcePaste">serial        up 7-00:00:00      2      idle     0/32/0/32       serial[01-02]</div>
<div id="_mcePaste">testing       up 6:00:00         1      idle     0/16/0/16       testing02</div><div id="_mcePaste"></div>
<p> </p><div></div></pre>
<div class="document"></div>
<div class="document">If you do not want to use <strong>hyperthreading</strong>, add the option --hint=nomultithread to your srun/sbatch command.</div>
<div class="document"></div>
<div class="document">Example:</div>
<div class="document"></div>
<pre class="document">#SBATCH --hint=nomultithread </pre>
</div>
<div class="document" style="padding-left: 0px; ">
<div></div>
</div>]]></content:encoded>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>mverina</dc:creator>
    <dc:rights></dc:rights>
    <dc:date>2018-05-09T14:05:00Z</dc:date>
    <dc:type>Page</dc:type>
  </item>


  <item rdf:about="https://argo-doc.ictp.it/using-201cmodule201d-command">
    <title>Using “module” command</title>
    <link>https://argo-doc.ictp.it/using-201cmodule201d-command</link>
    <description>The module command is used to load the environment for various software, like compilers, MPI libraries, etc.</description>
    <content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[<p style="padding-left: 0px; "> </p>
<p style="padding-left: 0px; ">The module command will let you load and unload the environment needed to run specific applications or use specific libraries. It is especially useful when you need to have different versions of the same software/library or the same software/library compiled against different compilers.</p>
<p style="padding-left: 0px; ">Our cluster, for instance, provides both GNU and Intel compilers, so to use MPI with either compiler we need two different builds of OpenMPI. Instead of having binary names like <i>mpicc-1.4.3-intel-11.1</i> or similar, we use <i>module</i> to update the PATH environment variable, so that after loading the specific module for openmpi/intel we can simply run <i>mpicc</i> and <i>mpirun</i>.</p>
<p style="padding-left: 0px; "><b>HOWTO</b></p>
<p style="padding-left: 0px; ">The syntax of the <i>module</i> command is the following:</p>
<pre style="padding-left: 0px; ">    module &lt;subcommand&gt; [&lt;arguments&gt;]</pre>
<p style="padding-left: 0px; ">&lt;subcommand&gt; may or may not require one or more &lt;arguments&gt;</p>
<p style="padding-left: 0px; ">A partial list of available subcommands follows:</p>
<ul style="padding-left: 0px; list-style-type: square; ">
<li style="padding-left: 0px; "><b>list</b> to list the currently loaded modules,</li>
<li style="padding-left: 0px; "><b>avail</b> to list all the available modules</li>
<li style="padding-left: 0px; "><b>load</b> to load one or more modules</li>
<li style="padding-left: 0px; "><b>unload</b> to unload one or more modules</li>
<li style="padding-left: 0px; "><b>purge</b> to unload <i>all</i> the currently loaded modules</li>
</ul>
<p style="padding-left: 0px; ">A detailed list of all subcommands is shown by running the command <span style="padding-left: 0px; "><b>module help</b></span></p>
<p style="padding-left: 0px; "><span style="padding-left: 0px; ">The output of the <b><i>module avail</i></b> command could be something like:</span></p>
<pre style="padding-left: 1.5em; ">$ module avail<br /><br />---------------- /opt-ictp/ohpc/modulefiles/mpi ----------------<br /> gnu-openmpi/boost/1.63.0<br /> gnu-openmpi/fftw/3.3.4<br /> gnu-openmpi/netcdf-cxx/4.3.0<br /> gnu-openmpi/netcdf-fortran/4.4.4<br /> gnu-openmpi/netcdf/4.4.1.1<br /> gnu-openmpi/phdf5/1.8.17<br /> gnu-openmpi/scalapack/2.0.2<br /> gnu-openmpi/scipy/0.19.0<br /> intel-mpi/2016<br /> openmpi/1.10.2/intel/2013<br /> openmpi/1.10.2/intel/2016<br /> openmpi/1.10.2/intel/2017        (D)<br /> openmpi/1.6.5/intel/2013<br /> openmpi/gnu<br /> openmpi/intel<br /><br />--------------- /opt-ictp/ohpc/modulefiles/libs ----------------<br /> boost/1.55<br /> boost/1.60                  (D)<br /> cln/1.3.2<br /> esmf/7.0.0<br /> gabriel/1.2<br /> gmp/4.3.2/default-compiler<br /> gmp/default-compiler<br /> grib_api/intel              (D)<br /> grib_api/1.12.0/intel<br /> grib_api/1.23.1/intel<br /> hdf5/intel                  (D)<br /> hdf5/1.8.12/intel<br /> hdf5/1.8.19/intel<br /> iml/1.0.3/gnu/4.4.0<br /> intel-mkl/2017<br /> mpc/0.9/default-compiler<br /> mpc/default-compiler<br /> mpfr/3.0.0/default-compiler<br /> mpfr/default-compiler<br /> netcdf/intel                (D)<br /> netcdf/4.3.1/intel/2013<br /> netcdf/4.4.1/intel/2017<br /> plasma/2.6.0<br /><br />------------- /opt-ictp/ohpc/modulefiles/compilers -------------<br /> cuda/5.5                intel/2013       (D)<br /> cuda/6.5                intel/2016<br /> cuda/9.1         (D)    intel/2017<br /> default-compiler        nocomp<br /> g++_addon/4.9.0         opencl-amd<br /> gcc/4.4.7               opencl-intel<br /> gcc/4.5.2               opencl/amd/2.8<br /> gcc/4.6.2               opencl/intel/1.2<br /> gcc/4.9.0        (D)    pgi/10.9<br /> gnat/2014               pgi/16.10        (D)<br /> gnu/5.4.0               pgi/18.4<br /> gnu7/7.2.0<br /><br />--------------- /opt-ictp/ohpc/modulefiles/apps ----------------<br /> PHCpack/2.3.95<br /> R/2.15                          (D)<br /> R/3.1.2<br /> R/3.3.1<br /> R/3.5.1<br /> alps/2.1.1<br /> anaconda3/5.1.0<br /> cdo/1.9.0<br /> cp2k/2.4                        (D)<br /> cp2k/2.6.1<br /> espresso/5.1                    (D)<br /> espresso/5.2.1<br /> espresso/6.2.1<br /> ferret/v6.842<br /> gdal/1.7.3<br /> gdal/2.2.4<br /> gdal/2.3.0<br /> gdal/2.3.1                      (D)<br /> gdl/0.9.4/intel<br /> gengetopt/2.22/default-compiler<br /> glpk/4.45/gnu/4.4.0<br /> gnu/gsl/2.2.1<br /> gnu/hdf5/1.8.17<br /> gnu/numpy/1.11.1<br /> gnu/openblas/0.2.19<br /> gnu/openmpi/1.10.6<br /> grads/binary<br /> gromacs/2016.3<br /> lhapdf/5.8.6/default-compiler<br /> maple/14<br /> matlab/r2011<br /> ncl/5.2.1/gnu/4.4.7<br /> ncl/6.0.0/gnu/4.4.7<br /> ncl/6.3.0/gnu/4.4.7<br /> nco/intel                       (D)<br /> nco/4.4.2/intel<br /> nco/4.6.8/intel<br /> nwchem/6.5<br /> proj/4.7.0<br /> proj/5.1.0                      (D)<br /> python/esp<br /> python/2.7.13                   (D)<br /> root/5.30.04/default-compiler<br /> root/6.08.06<br /><br />------------------ /opt/ohpc/pub/modulefiles -------------------<br /> autotools        (L)    gnu7/7.2.0         papi/5.5.1<br /> clustershell/1.8        llvm4/4.0.1        pmix/1.2.3<br /> cmake/3.9.2             llvm5/5.0.0        prun/1.1<br /> gnu/5.4.0               ohpc        (L)    prun/1.2   (L,D)<br /><br /> Where:<br /> D:  Default Module<br /> L:  Module is loaded<br /><br /></pre>
<div>
<div>The syntax used for module names is usually: <i>program_name/program_version/compiler_name/compiler_version</i>.</div>
</div>
<div></div>
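<p style="padding-left: 0px; ">As an illustration of this convention, a module name from the listing above can be split into its components in a POSIX shell (the variable names below are purely illustrative, not part of the module system):</p>
<pre style="padding-left: 1.5em; "># Split a module name of the form
# program_name/program_version/compiler_name/compiler_version
mod="openmpi/1.10.2/intel/2017"
old_ifs=$IFS; IFS=/
set -- $mod
IFS=$old_ifs
program=$1; version=$2; compiler=$3; compiler_version=$4
echo "$program $version ($compiler $compiler_version)"   # prints: openmpi 1.10.2 (intel 2017)</pre>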
<p style="padding-left: 0px; ">To load, e.g., the openmpi module built with the intel compiler, you can run <b>module load openmpi/intel</b>. If you then run the command <b>module list</b> you will see:</p>
<pre style="padding-left: 1.5em; ">$ module load openmpi/intel<br /><br />$ module list<br />Currently Loaded Modulefiles:<br />1) intel/2013      2) openmpi/intel</pre>
<p style="padding-left: 0px; ">As expected, the intel module has been loaded automatically, since openmpi requires it.</p>]]></content:encoded>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>mverina</dc:creator>
    <dc:rights></dc:rights>
    <dc:date>2018-05-09T14:05:00Z</dc:date>
    <dc:type>Page</dc:type>
  </item>


  <item rdf:about="https://argo-doc.ictp.it/sharebox/viewgit">
    <title>Viewgit</title>
    <link>https://argo-doc.ictp.it/sharebox/viewgit</link>
    <description></description>
    
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Clement Onime</dc:creator>
    <dc:rights></dc:rights>
    <dc:date>2014-09-30T16:40:00Z</dc:date>
    <dc:type>Window</dc:type>
  </item>


  <item rdf:about="https://argo-doc.ictp.it/sharebox/test">
    <title>test</title>
    <link>https://argo-doc.ictp.it/sharebox/test</link>
    <description>Hello</description>
    
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Clement Onime</dc:creator>
    <dc:rights></dc:rights>
    <dc:date>2014-09-30T16:05:09Z</dc:date>
    <dc:type>Folder</dc:type>
  </item>


  <item rdf:about="https://argo-doc.ictp.it/sharebox/admin">
    <title>Admin</title>
    <link>https://argo-doc.ictp.it/sharebox/admin</link>
    <description></description>
    
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>opadmin</dc:creator>
    <dc:rights></dc:rights>
    <dc:date>2013-12-16T09:38:27Z</dc:date>
    <dc:type>Window</dc:type>
  </item>


  <item rdf:about="https://argo-doc.ictp.it/sharebox/news">
    <title>News</title>
    <link>https://argo-doc.ictp.it/sharebox/news</link>
    <description>Site news</description>
    
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>opadmin</dc:creator>
    <dc:rights></dc:rights>
    <dc:date>2013-12-16T09:31:09Z</dc:date>
    <dc:type>Folder</dc:type>
  </item>


  <item rdf:about="https://argo-doc.ictp.it/sharebox/news/aggregator">
    <title>News</title>
    <link>https://argo-doc.ictp.it/sharebox/news/aggregator</link>
    <description>Site news</description>
    <content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[]]></content:encoded>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>opadmin</dc:creator>
    <dc:rights></dc:rights>
    <dc:date>2013-12-16T09:31:09Z</dc:date>
    <dc:type>Collection</dc:type>
  </item>


  <item rdf:about="https://argo-doc.ictp.it/sharebox/events">
    <title>Events</title>
    <link>https://argo-doc.ictp.it/sharebox/events</link>
    <description>Site events</description>
    
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>opadmin</dc:creator>
    <dc:rights></dc:rights>
    <dc:date>2013-12-16T09:31:09Z</dc:date>
    <dc:type>Folder</dc:type>
  </item>


  <item rdf:about="https://argo-doc.ictp.it/sharebox/events/aggregator">
    <title>Events</title>
    <link>https://argo-doc.ictp.it/sharebox/events/aggregator</link>
    <description>Site events</description>
    <content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[]]></content:encoded>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>opadmin</dc:creator>
    <dc:rights></dc:rights>
    <dc:date>2013-12-16T09:31:10Z</dc:date>
    <dc:type>Collection</dc:type>
  </item>


  <item rdf:about="https://argo-doc.ictp.it/sharebox/Members">
    <title>Members</title>
    <link>https://argo-doc.ictp.it/sharebox/Members</link>
    <description>Container for the members' personal folders.</description>
    
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>opadmin</dc:creator>
    <dc:rights></dc:rights>
    <dc:date>2013-12-16T09:31:10Z</dc:date>
    <dc:type>Folder</dc:type>
  </item>


  <item rdf:about="https://argo-doc.ictp.it/sharebox/sparklesharebiarrow.png">
    <title>Sparkleshare icon</title>
    <link>https://argo-doc.ictp.it/sharebox/sparklesharebiarrow.png</link>
    <description>Sparkleshare icon for up-to-date shares</description>
    
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Clement Onime</dc:creator>
    <dc:rights></dc:rights>
    <dc:date>2014-10-01T06:33:36Z</dc:date>
    <dc:type>Image</dc:type>
  </item>


  <item rdf:about="https://argo-doc.ictp.it/sharebox/front-page">
    <title>Welcome</title>
    <link>https://argo-doc.ictp.it/sharebox/front-page</link>
    <description>The sharebox server, located at sharebox.ictp.it, provides a Dropbox-like service for ICTP account-holders that supports file-sharing and collaboration with both ICTP and non-ICTP collaborators.

It is not a replacement for FTP or other services aimed at the transfer of large files or the regular/periodic backup of important documents.

</description>
    <content:encoded xmlns:content="http://purl.org/rss/1.0/modules/content/"><![CDATA[
<h2>Client-side software (both ICTP and non-ICTP account holders)</h2>
<p>The service uses the SparkleShare client software from <a href="http://www.sparkleshare.org/">http://www.sparkleshare.org/</a>. You may also obtain a copy of the client software using the links below.</p>
<ul>
<li>Click on <a class="external-link" href="https://sharebox.ictp.it/sparkleshare-windows.msi">Windows (Vista, 7, 8 and above)</a> to download the installer, then double-click it to install. Note: Windows XP is not supported.</li>
<li>Click on <a class="external-link" href="https://sharebox.ictp.it/sparkleshare-mac.zip">Macintosh OS X</a> to download the zip file. Open the downloaded zip file, then drag-and-drop the SparkleShare folder from inside it into the "Applications" folder of your system hard disk.</li>
<li>
<p><span class="visualHighlight"><b>ICTP Desktop in Linux</b></span>: To install, run the command <code>sudo ictp-install sparkleshare</code>.</p>
</li>
<li>
<p><span class="visualHighlight"><b>Ubuntu and similar distributions such as Linux Mint</b></span>: To install, use the command <code>sudo apt-get install sparkleshare</code>.</p>
</li>
<li>
<p>On other Linux distributions you may have to compile the source code available from <a href="http://www.sparkleshare.org/">http://www.sparkleshare.org/</a>.</p>
</li>
</ul>
<h3>Client Usage</h3>
<p>Please NOTE that for synchronisation to work correctly, the names of files, documents, directories, folders and shared-folders <b>MUST NOT</b> include the following characters:</p>
<pre><code><b>&lt;
&gt;
:
|
?
*
\
/</b></code></pre>
<p>Also, "hidden" files (files whose names begin with a dot) are not synchronised.</p>
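<p>As a hedged sketch (this helper is hypothetical, not part of the SparkleShare client), the naming rules above can be checked from a POSIX shell before placing a file in a shared-folder:</p>
<pre><code># Hypothetical helper: succeeds only if a name is safe to synchronise.
is_syncable() {
    # "hidden" names (leading dot) are not synchronised
    case "$1" in .*) return 1 ;; esac
    # reject forbidden characters; octal 074 and 076 are the two
    # angle-bracket characters
    printf '%s' "$1" | grep -q "[$(printf '\074\076'):|?*\\/]" && return 1
    return 0
}

is_syncable "report-v2.docx" && echo "ok to sync"   # prints: ok to sync
is_syncable "draft?.txt" || echo "rejected"         # prints: rejected</code></pre>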
<p>After installation, you will need to start the client software manually. Once it is running (indicated by an icon <img class="image-inline" title="Sparkleshare icon" src="https://argo-doc.ictp.it/sharebox/sparklesharebiarrow.png" alt="Sparkleshare icon" /> on the task-bar), follow the steps below to:</p>
<ul>
<li><b>Set-up automatic start of the Sparkleshare client software</b></li>
<ul>
<li>For the Windows platform: Use the "Local Services" utility from the System group of the Windows Control Panel.</li>
<li>For Mac OS X: Use "Login Items" from the "Accounts" section of "System Preferences".</li>
<li>For the Linux KDE Desktop: Use "AutoStart" from the "StartUp and Shutdown" section of the KDE "System Settings" utility.</li>
<li>For the Linux GNOME Desktop or Ubuntu Unity Desktop: Use the "Startup Applications" utility.</li>
</ul>
<li><b>Identifying the client ID of your device:</b></li>
<ul>
<li>Right-click on the SparkleShare client icon</li>
<li>Select "SparkleShare" from the pop-up menu</li>
<li>Select "Client ID" from the new drop-down menu</li>
<li>Select "Copy to Clipboard" </li>
</ul>
<li><b>Communicating the client ID of a device to the shared-folder owner:</b></li>
<ul>
<li>Open your favourite e-mail client</li>
<li>Compose a new e-mail message addressed to the owner (ICTP account holder) of the shared-folder you wish to access</li>
<li>Paste the "Client ID" from the Clipboard into the body of the message. DO NOT REFORMAT THE BODY.</li>
<li>Add a proper Subject to the message and send it.</li>
</ul>
<li><b>Connecting to a shared-folder:</b></li>
<ul>
<li>The shared-folder owner will send you the Address and Remote path of the shared-folder. This information will look similar to:</li>
<ul>
<li>Example Address:  ssh://<i>{name</i>}@sharebox.ictp.it</li>
<li>Example Remote path: /share/<i>{name}</i>/Shared-folder-name</li>
</ul>
<li>Right-click on the SparkleShare client icon</li>
<li>Select "SparkleShare" from the pop-up menu</li>
<li>Select "Add Hosted Project..." from the drop-down menu</li>
<li>Click  "On my own server" </li>
<li>Type the provided Address and Remote path into the corresponding fields. NOTE: this information is CASE-SENSITIVE.</li>
<li>Enable the "Fetch prior history" check-box</li>
<li>Click on "Add"</li>
</ul>
<li><b>Accessing and using a shared-folder</b></li>
</ul>
<ul>
<ul>
<li>Right-click on the SparkleShare client icon</li>
</ul>
<ul>
<li>Select or click the <i>shared-folder</i> from the pop-up menu to open it in an explorer or file-manager window.</li>
<li>Drag-and-drop the file, document or folder to be shared into the shared-folder explorer or file-manager window. NOTE: it is possible to edit documents and files within this window. Creating copies or renaming of files and folders inside the shared-folder is supported but may require an additional operation for complete synchronisation.</li>
<li>NOTE: Always access the folder via the Sparkleshare task-bar icon in order to avoid possible problems.</li>
</ul>
</ul>
<ul>
<li><b>Basic troubleshooting errors, support, assistance or help</b></li>
<ul>
<li>The SparkleShare client utility will indicate errors such as failed synchronisation by placing an exclamation mark (!) in its task-bar icon. Right-clicking on the task-bar icon will provide more details about the error and, sometimes, a way to retry or resolve the problem. Synchronisation errors are typically limited to single/individual devices.</li>
<li>Internally, the SparkleShare utility relies on a folder named "SparkleShare" within your home directory for correct operation; please do not make manual changes to this folder or delete its contents.</li>
<li>In case of difficulties, please contact ICTS staff on ext 999 or via the web from <a class="external-link" href="http://icts.ictp.it/help/">http://icts.ictp.it/help/</a></li>
</ul>
</ul>
<h2>Administering or creating shared-folders (ICTP account holders only)</h2>
<p>For ICTP account holders: simply login to the <a class="internal-link" href="https://argo-doc.ictp.it/sharebox/admin">admin page</a> and create a share (project) to begin. The following procedures are available only after you login to the Sharebox <a class="internal-link" href="https://argo-doc.ictp.it/sharebox/admin">admin page</a>. Before anyone can access your share, you will have to grant them access. Access control is based on a "Client ID", which is unique for each device. That is, the "Client ID" for a laptop is different from that of a desktop, even if both are used exclusively by the same individual. This implies that your home computer, your laptop and your desktop/office computer each require a separate authorization.</p>
<ul>
<li><b>Creating a Shared-folder:</b></li>
<ul>
<li>Click on the "Shares" TAB</li>
<li>Click on the "Add" button</li>
<li>In the new pop-up dialog box, type in a "<b>Name</b>" for the shared-folder. NOTE: you can use any name for the share apart from the characters listed above.</li>
<li>Type in a brief one-line "<b>Description</b>" for the shared-folder. NOTE: do not fill in Address or Remote Path.</li>
<li>Click on the "Save" button</li>
</ul>
<li><b>Obtaining the address/remote path information for a shared folder</b></li>
<ul>
<li>Click on the "Shares" TAB</li>
<li>Click on the name of the shared-folder </li>
<li>Copy and paste, one after the other, the "<b>Address</b>" and "<b>Remote path</b>" information from the new pop-up dialog box into the body of an e-mail message addressed to your collaborator. NOTE: this information is CASE-SENSITIVE.</li>
<li>Add a proper Subject to the email message and send it.</li>
<li>Finally, click "Cancel" to close the pop-up dialog box.</li>
<li><i>NOTE: You can delete or remove a shared-folder by clicking (selecting) the <b>Delete</b> check-box before clicking "Save". This will not automatically delete copies of documents contained in the shared-folder from devices.</i></li>
</ul>
<li><b>Granting access to your shared-folder.</b> </li>
<ul>
<li>Once you receive a key from your collaborator, you can add it using the steps below:
<ul>
<li>Click on the "ClientID (keys)" TAB</li>
<li>Click on the "Add" button</li>
<li>Type in a short name in the "<b>username/HOST-ID</b>" field</li>
<li>Copy and paste in the "Client ID" from the body of the received email into the "<b>KEY</b>" field.</li>
<li>Click on the "Save" button</li>
</ul>
<i>NOTE: You can delete or remove access by clicking (selecting) the <b>Delete</b> check-box before clicking "Save". This will not automatically delete copies of documents previously accessed from devices; it just prevents access to subsequently updated versions.</i></li>
</ul>
<li><b>Browsing your Shared-folder from the WEB</b></li>
<ul>
<li>Click here for <a class="internal-link" href="https://argo-doc.ictp.it/sharebox/viewgit">web-based access</a> to your shared-folder. This interface allows the owner of the shared-folder to access and download current and previous versions of stored files, documents and folders.</li>
</ul>
</ul>
<ul>
<li><b>Obtaining support, assistance or help</b></li>
<ul>
<li>The default space available to each ICTP account holder on sharebox.ictp.it is 5GB. If you require more space, please send your request to ICTS.</li>
<li>You may create as many shared-folders as you wish. Note that each shared-folder may have a completely different set of collaborators.</li>
<li>In case of requests, comments or difficulties, please contact ICTS staff on ext 999 or via the web from <a class="external-link" href="http://icts.ictp.it/help/">http://icts.ictp.it/help/</a></li>
</ul>
</ul>
<p><i><b>Thank you and enjoy the sharebox.ictp.it service.</b></i></p>]]></content:encoded>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>opadmin</dc:creator>
    <dc:rights></dc:rights>
    <dc:date>2014-10-01T07:11:57Z</dc:date>
    <dc:type>Page</dc:type>
  </item>




</rdf:RDF>
