VOMS deployment issues
Last Update: 2005-03-23
CERN background information:
Software versions we run today on the CERN SL3 VOMS server [VOs configured: ALICE, ATLAS, CMS, DTEAM, LHCb, SIXT, SEEGrid], https://voms.cern.ch:8443/edg-voms-admin/[VOname]/index.html (load your certificate to open this URL):
- edg-voms-admin
v0.7.6-1, server side is v0.7.6 by K.Lorentey from Budapest
University.
- edg-voms Version: 1.3.7, Compiled: Feb 4 2005 14:41:46 by
V.Ciaschini/V.Venturi from CNAF
- edg-mkgridmap version 2.1.1 by F.Spataro from Parma
- Globus as packaged in VDT1.2.0rh9 (Globus 2.4.3 + patches).
Software versions we run today on the CERN RH7 VOMS server [VOs configured: ALICE, ATLAS, CMS, DTEAM, LHCb], https://lcg-voms.cern.ch:8443/edg-voms-admin/[VOname]/index.html (load your certificate to open this URL):
- edg-voms-admin v0.7.3, server side is v0.7.6 by K.Lorentey
- edg-voms Version: 1.3.1, Compiled: Feb 4 2005 14:41:46 by
V.Ciaschini/V.Venturi
- edg-mkgridmap version 2.1.1 by F.Spataro
- Globus as packaged in VDT1.1.13rh7gcc3 (Globus 2.4.3 + patches).
NB! This machine will be re-installed with SL3 and will become the (FNAL) VOMRS
test host for the LHC Experiment VOs.
Summary of product status:
We are reasonably happy with the recent VOMS installation on voms.cern.ch (SL3), thanks to our YAIM experts' help. To repeat the exercise in the future, we need a pre-packaged "voms-server" rpm distribution, as sorting out the dependencies is quite difficult.
In any case, as one can see in the pre-selected Savannah searches, there are many pending bugs, tasks and submitted patches for the next LCG2 release: http://cern.ch/dimou/lcg/voms/savannah_entries_on_VOMS_VOMRS.html
This Savannah selection covers the projects 'lcgoperations', 'jra1' and 'jra3'.
However, this report and the VOMS deployment documentation we wrote only cover experience from installation and testing of the software available in the LCG CVS. As there is different code in the various CVS repositories, it is difficult to know which bugs registered under a different area might affect us as well.
The existence of code in multiple CVS repositories is a big problem today. It even creates operational problems within a given VO: the USATLAS sub-group of the ATLAS VO can't be used by our American colleagues, due to inter-operability problems between the code on the server voms.cern.ch and the rest of the Open Science Grid (OSG) software environment.
Related to VOMS-core (server and client) code maintained at CNAF:
Vincenzo.Ciaschini@cnaf.infn.it (VOMS core service development) is the main developer. Vincenzo was ill for months and Valerio Venturi replaced him as the product maintainer. We had a difficult situation while their software passed through versions 1.3.1 to 1.3.7 (on the LCG CVS) within a few days, due to intermediate problems with dependencies and some omissions in the tag that made it temporarily uninstallable. At this moment we run the latest version on voms.cern.ch and there seems to be no problem, but we have not yet run stress or performance tests.
We need:
- An Oracle-based vomsd, promised for delivery at the end of April 2005 (Savannah ticket).
- A stable option syntax for the voms-proxy-init command (it changed with version 1.3.7), to avoid user support issues and documentation discrepancies; see the sketch after this list.
- The harmonisation of version numbers across CVS repositories (infnforge, lcg and egee). We spent a lot of time trying to understand what each package in each repository includes.
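For reference, a minimal sketch of the kind of invocation affected; the VO name and role below are illustrative examples, not a statement of the exact 1.3.7 syntax:

    voms-proxy-init -voms dteam                        # plain VO membership
    voms-proxy-init -voms dteam:/dteam/Role=lcgadmin   # request a specific role

Every change to these options forces updates to user guides and support answers, hence the request for a frozen syntax.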
Related to upcoming LCG2 Release integration issues:
The vomses directory that comes with LCG2 2_3_0 points to the old CERN RH7 VOMS server (bug #6983). This directory is important because it is used to check valid membership in a VO when users run the voms-proxy-init command.
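For illustration, each vomses entry is a one-line record naming the VO and the server that voms-proxy-init should contact; the values below are a hypothetical sketch (the port and server host DN must match the actual server):

    "dteam" "voms.cern.ch" "15004" "/DC=ch/DC=cern/OU=computers/CN=voms.cern.ch" "dteam"

An entry still pointing at lcg-voms.cern.ch therefore makes voms-proxy-init contact the old RH7 server, which is exactly the problem reported in bug #6983.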
Our recommendation for LCG2 2_4_0 is to use (LCG CVS version numbering):
- voms-admin 0.7.6
- voms 1.3.7
- edg-mkgridmap 2.4.1
- perform tests with the new SL3 VOMS server voms.cern.ch and no longer with the RH7 one (lcg-voms.cern.ch, an alias of tbed0152.cern.ch).
grid-map file for mixed environments (LDAP-based and VOMS-based VOs co-exist):
Users shouldn't be asked to register in two different places, one for Usage Rules acceptance (lcg-registrar) and another for entering the VO (VOMS). I started this discussion with the CDF VO example on 13 Dec 2004, copying all interested parties.
Meanwhile the Joint Security Policy Group (JSPG) decided to turn the (so far common) Usage Rules into a minimal common set of rules, as needed by the sites, plus a separate per-VO Acceptable Use Policy. The VOMS or VOMRS registration interfaces should be equipped with the exact Acceptable Use Policy of each specific VO, and members of VOMS-only-based VOs should get into the grid-map file despite the fact that lcg-registrar (the auth line in edg-mkgridmap.conf) doesn't contain them.
This required an updated edg-mkgridmap script, kindly contributed by its original author Fabio Spataro (patch #324). In mid-February 2005 we received edg-mkgridmap 2.4.0, which had a bug with SSL connections to VOMS databases. edg-mkgridmap 2.4.1 is available now, but has probably not been sufficiently tested.
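To illustrate the mixed setup, a minimal edg-mkgridmap.conf sketch; the URIs below are hypothetical examples in the style of the 2.4.x series, not our production values:

    # VOMS-based VO: members are pulled from the VOMS server over SSL
    group vomss://voms.cern.ch:8443/voms/dteam .dteam
    # LDAP-based VO: legacy membership source
    group ldap://grid-vo.nikhef.nl/ou=lcg1,o=atlas,dc=eu-datagrid,dc=org .atlas
    # auth filter: users who accepted the Usage Rules on lcg-registrar
    auth ldap://lcg-registrar.cern.ch/ou=users,o=registrar,dc=lcg,dc=org

Each accepted user ends up as a grid-map file line of the form "/C=CH/O=CERN/CN=Some User" .dteam; the point under discussion is that members of VOMS-only VOs must reach this file even though the auth source does not list them.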
Support issues for other VO server managers:
We helped the BIOMED VO manager to set up a VOMS server by providing step-by-step documentation and answering questions by email and phone. It will be difficult to diagnose installation and configuration problems in different and remote system environments in the future, especially when Karoly Lorentey, the voms-admin developer, leaves CERN in April 2005.
Notes from a meeting with managers of VOs using lcg-registrar.cern.ch via https://lcg-registrar.cern.ch/cgi-bin/register/account.pl for Usage Rules' acceptance and VO membership registration can be found here.
Support issues for LHC experiment VO managers:
The User Registration Task Force (TF) (Mandate) aims at making the tools compliant with the new GDB-approved User Registration Requirements by extending their functionality according to the (also GDB-approved) proposal. The LHC Experiment VO candidate members and VO managers will not use voms-admin, i.e. what looks like https://voms.cern.ch:8443/edg-voms-admin/[VOname]/index.html; they will use the (FNAL-developed) VOMRS, i.e. what looks like https://hotdog62.fnal.gov:8443/vo-LCG/vomrs (load your certificate to open these URLs).
The actual underlying VODB will still be VOMS. The VOMRS interface will be
tailored to link to the CERN HR db (ORGDB) according to the GDB requirement.
Problems:
- The "LHC VO special" VOMRS code is written but the ORGDB link is not
tested.
- ORGDB data owners don't allow the VOMRS developers to test with real data, for data privacy reasons.
- By the time testing with fake data is completed, Karoly, who also wrote the ORGDB link modules, will have left CERN.
- The user 'suspension' field and its notification are not yet tested in VOMRS.
- Periodic user 'expiration' (yearly or less, depending on contract type) and the user/VO manager notification are not yet tested in VOMRS.
Please read the notes
of our monthly check-point meetings for details on the
TF progress.
Support issues for other VO managers:
The voms-admin distribution on the EGEE CVS repository looks different from the one on the LCG repository and in fact contains different code. We need to use voms-admin for DTEAM and SIXT VO management. We won't be able to provide support to VOs which use the EGEE code. Important Security Requirements for user registration are not yet implemented in voms-admin, namely:
- User 'suspension' field and notification.
- User periodic (yearly or less, depending on contract type) 'expiration' and
user/VOmanager notification.
A reliable VOMS service at CERN for DTEAM and LHC experiment VOs:
In November 2004, wishing to replace our VOMS server by a high-availability service with proper backup and system monitoring, we wrote the Functional Requirements for a centrally managed VOMS service, as requested by the IT Database group. This request has so far been refused by them, on the basis that they only support Oracle databases.
We are updating the Functional Requirements document now, attempting to get support from the IT/FIO group. The outcome is still unclear, but VOMS is already part of the LCG2 releases. VOMS being a very visible service (the VOMS server must respond every time a user of the VOs it hosts types voms-proxy-init), it must be available, performant and reliable.
Maria Dimou,
IT/GD Grid Infrastructure Services