UZH CCD Testing Setup (DAMIC-M / DAMIC)
This TWiki page is devoted to the functional description of the CCD testing setup at the Physics Institute of the University of Zurich and its associated hardware/software. A description of the cryostat, the associated electronics, and the instrumentation infrastructure is provided. The setup is based on the AlpineCube cryostat apparatus, an in-house design by the University of Zurich's Peter Robmann, and is currently located in building 36, floor H, room 78.
Please register to be able to modify this page.
Below is a list of tools that are or were developed at Zurich for the DAMIC-M experiment.
Tool |
---|
cRIO |
Front-End-Board |
Device | Power | Network | Switch port | Output 1 | Output 2 | Output 3 | Output 4
---|---|---|---|---|---|---|---
Server | Multiprise | Switch 1 / UZH Network | Eth1.1 | UZH Network (eth4) | ACM (eth5) | Damic Intranet (eth6) | LTA (eth7)
R&S HMP 2030 | Multiprise | Switch 1 | Eth5.1 | LTA (12V) | Leach Frontend (+5V) | Leach Frontend (-5V) | N/A
R&S HMP 2030 | Multiprise | Switch 1 | Eth8.1 | ACM (-15V) | ACM (+15V) | ACM (-30V) | N/A
Keithley 2470 SourceMeter | Multiprise | Switch 1 | Eth4.1 | Leach Frontend VSUB | N/A | |
VME8004X | PDU | N/A | N/C | ACM | N/C | |
Lakeshore 335 | Multiprise | DS-700 | N/A | PT100 Cryohead | PT100 Sample holder | N/A |
Netgear GS 108 (1) | Multiprise | Server 2 | N/A | N/A | | |
Netgear GS 108 (2) | Multiprise | Switch 1 | N/A | N/A | | |
Netio Power PDU 4PS | Multiprise | Switch 1 | Eth3.1 | VME8004X | Orca Cryocooler | Solenoid Valve | Vacuum Pump
Single Gauge TPG 361 | Multiprise | Switch 1 | Eth2.1 | PKR 251 | N/A | |
LTA | HMP4040 | Server 3 | Server 3 | N/A | | |
Leach System | Multiprise | N/A | | ARC-66 left | ARC-66 right | N/A |
ACM | VME | Server 4 | Server 4 | N/A | | |
HiCube Vacuum Pump | PDU | Moxa Serial LAN | N/A | AlpineCube | | |
Cryocooler | PDU | N/A | | Cryostat | N/A | |
Bürkert W26A Solenoid Valve | PDU | N/A | N/A | | | |
DS-700 | Multiprise | Switch 1 | Eth7.1 | Lakeshore 335 | N/C | N/A |
Moxa Serial LAN | Multiprise | Switch 1 | Eth6.1 | HiCube Vacuum Pump | N/A | |
This table shows the maximal power consumption of our system. Some manufacturers only provide a VA rating; for the conversion we assumed an efficiency (power) factor of 0.8.
Instrument | Power Consumption |
---|---|
R&S HMP2030 | 300 W |
R&S HMP2030 | 300 W |
Keithley 2470 SourceMeter | 220 VA |
Lakeshore 335 | 210 VA |
RS300-E7/PS4 | 350 W |
Netgear GS 108 | 10 W |
NETIO PowerPDU 4PS | 5 W |
TPG 361 | 50 VA |
Moxa Serial LAN | 1 W |
DS-700 | 5 W |
Bürkert 0211 Solenoid Valve | 4 W |
Pfeiffer HiCube 30 | 170 W |
ORCA Mixed Refrigerant Cooler | 750 VA |
Leach Readout | 40 W |
VME Crate (WV8004XVME00) | 450 W |
Total | 3726 W |
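The VA-to-W conversion used in the table can be reproduced with a short sketch. The 0.8 power factor and the VA ratings are taken from the table above; the helper function name is our own:

```python
# Convert an apparent-power rating (VA) to an estimated real power (W)
# using the assumed power/efficiency factor of 0.8 from the table above.
POWER_FACTOR = 0.8

def va_to_watts(va: float, pf: float = POWER_FACTOR) -> float:
    """Estimate real power in W from an apparent-power rating in VA."""
    return va * pf

# Instruments whose manufacturers only quote a VA rating
va_rated = {
    "Keithley 2470": 220,
    "Lakeshore 335": 210,
    "TPG 361": 50,
    "ORCA Cooler": 750,
}
estimated_w = {name: va_to_watts(va) for name, va in va_rated.items()}
print(estimated_w)  # Keithley 2470 -> 176.0 W, ORCA Cooler -> 600.0 W, ...
```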
Device | IP | Netmask | Gateway | DNS | Hostname | Username | Password
---|---|---|---|---|---|---|---
Server Intranet | 192.168.200.1 | 255.255.255.0 | 192.168.200.1 | 130.60.164.1 | CCDTest | damic | 1rChEL
Server Internet | 10.65.117.42 | 255.255.255.0 | | | | |
Power PDU 4PS | 192.168.200.11 | 255.255.255.0 | 192.168.200.1 | 130.60.164.1 | PowerPDU-CF | admin | 1rChEL@PDU
Single Gauge | 192.168.200.12 | 255.255.255.0 | 192.168.200.1 | 130.60.164.1 | N/A | |
Serial-LAN Adapter Vacuum | 192.168.200.13 | 255.255.255.0 | 192.168.200.1 | 130.60.164.1 | | admin | 1rChEL@Serial
HVPSU Keithley | 192.168.200.14 | 255.255.255.0 | 192.168.200.1 | | K-2470 | admin | 1rChEL@HVPSU
LVPSU R&S HMP2030 | 192.168.200.15 | 255.255.255.0 | 192.168.200.1 | | | admin | 1rChEL@LVPSU
LVPSU R&S HMP2030 | 192.168.200.16 | 255.255.255.0 | 192.168.200.1 | | | admin | 1rChEL@LVPSU
USB-LAN adapter Lakeshore | 192.168.200.17 | 255.255.255.0 | 192.168.200.1 | 130.60.164.1 | Lakeshore | admin | 1rChEL
LTA | 192.168.133.7 | | | | | admin | 1rChEL@USB24
ACM | 192.168.1.5 | | | | | |
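Instruments on the intranet that speak SCPI (for example the Keithley 2470 at 192.168.200.14) can be checked from the server with a raw-socket identification query. This is a minimal sketch only: port 5025 is the customary SCPI raw-socket default and should be verified against the instrument manual, and the function names are our own:

```python
import socket

def scpi_frame(command: str) -> bytes:
    """Terminate an SCPI command with a newline, as the instrument parser expects."""
    return command.strip().encode("ascii") + b"\n"

def scpi_query(host: str, command: str, port: int = 5025, timeout: float = 2.0) -> str:
    """Send one SCPI command over a raw TCP socket and return the reply line."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(scpi_frame(command))
        return sock.recv(4096).decode("ascii").strip()

if __name__ == "__main__":
    # Hypothetical usage on the setup intranet; requires the instrument to be on.
    print(scpi_query("192.168.200.14", "*IDN?"))
```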
The main computer is an Asus RS300-E7/PS4 (RS300-E7-PS4/WOCPU/WOMEN/WOHDD) 1U single-CPU model. The main processor is a single Intel(R) Xeon(R) E3-1220 ("Sandy Bridge") CPU @ 3.10 GHz with 4 cores, while the graphics subsystem is handled by an Nvidia Quadro 600 graphics card with 1 GB DDR3 and 96 CUDA cores on a PCI Express x16 Gen2 bus. Four 4 GB DDR3-1333 ECC UDIMMs are installed, giving 16 GB of main memory (RAM); a maximum of 32 GB is supported in a 4 x 8 GB dual-channel configuration.
Server Specifications | ||||||
---|---|---|---|---|---|---|
Server 1 | Server 2 | |||||
Case Model | RS300-E7/PS4 | Product Page | Manual | RS300-E7/PS4 | Product Page | Manual |
Mainboard | ASUS P8B-E/4L | Product Page | Manual | ASUS P8B-E/4L | Product Page | Manual |
Chipset | Intel C204 chipset | N/A | Intel C204 chipset | N/A | ||
CPU | Xeon E31290 v2 @ 3.70GHz | Xeon E31270 v2 @ 3.50GHz | ||||
Socket Type | LGA1155 | LGA1155 | ||||
RAM | 4 x 8 GB DDR3 1600 ECC UDIMM | 4 x 8 GB DDR3 1600 ECC UDIMM | ||||
Bios Version | Version 6702 (latest) | Latest Version | Version 6702 (latest) | Latest Version | ||
Graphics Card | Nvidia Quadro 600, 1 GB GDDR2 | N/A | N/A | N/A | ||
Network | Quad Intel® GbE LAN controllers with IPv6 | Quad Intel® GbE LAN controllers with IPv6 | ||||
Hard Drive | 4 x Hitachi Ultrastar 7K4000 0F14683 (4TB) | Datasheet |
For storage, an array of four 4 TB hot-swap HDDs configured in RAID 10 is used with the on-board Intel RAID controller (Intel Rapid Storage Technology). In this configuration data is striped across two mirrored pairs: striping (RAID 0) improves read/write speed, while mirroring (RAID 1) maintains a constant redundant copy of the entire system. The four 4 TB drives therefore provide an effective storage capacity of 8 TB.
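The capacity arithmetic above can be sketched as follows (drive count and size come from the text; the function itself is illustrative only):

```python
def raid10_effective_tb(n_drives: int, drive_tb: float) -> float:
    """RAID 10 mirrors each striped pair, so half the raw capacity is usable."""
    if n_drives % 2 != 0:
        raise ValueError("RAID 10 requires an even number of drives")
    return n_drives * drive_tb / 2

print(raid10_effective_tb(4, 4.0))  # 4 x 4 TB -> 8.0 TB usable
```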
The system does not support UEFI, so access to the full storage capacity (volumes >2 TB) requires an operating system that supports the GPT partition scheme. The lack of UEFI limits Windows to MBR-only installations, so UNIX-type operating systems are recommended. Ubuntu 20.04.6 LTS was chosen as the operating system due to its compatibility with LabVIEW (Q3 2022), the LTA, and our image acquisition software. The default package manager is apt; rpm is also present. For further information about our server, please read the following pages.
Multiple software packages, programs, and other files are installed on this server; a full list can be found at the following link: Software
Our instruments can be loosely grouped into five categories:
All implementations of CCD readout systems share a common architecture comprising four functional elements. The picture below shows the general outline of a CCD.
Three different readout options for skipper CCDs exist. The first is the Leach system, developed by Astronomical Research Cameras. The second is the Low Threshold Acquisition (LTA) system, developed by Fermilab for the Oscura experiment. The third uses the ACM board, developed by the University of Chicago and LPNHE specifically for the DAMIC-M experiment. Information on each system is available on the following pages:
The following pages display various information about the CCDs used in this setup: