====== CRAB3 ======
See:
  * Tutorial: https://
  * Configuration: https://
  * Commands: https://
  * Example: https://
\\ \\
====== CRAB2 ======
CRAB2 has been superseded by CRAB3.
===== Setup local environment =====
In order to submit jobs to the Grid, you must have access to an LCG User Interface (LCG UI); it allows you to access WLCG-affiliated resources in a fully transparent way. Then set up the CMSSW software and source the CRAB environment, in that order. Remember to create a proxy certificate for CMS.
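As a sketch of the order described above (the CMSSW release name and the CRAB setup script path are placeholders; use the ones appropriate for your site and analysis):

<code bash>
# 1. On the LCG UI: create and enter a CMSSW working area (release name is a placeholder)
cmsrel CMSSW_X_Y_Z
cd CMSSW_X_Y_Z/src
cmsenv

# 2. Source the CRAB environment (the script path is site-dependent)
source /path/to/crab.sh

# 3. Create a proxy certificate for the CMS VO
voms-proxy-init -voms cms
</code>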
===== CRAB configuration file for Monte Carlo data =====
The CRAB configuration file (default name ''crab.cfg'') should be located in the same directory as the CMSSW parameter-set to be used by CRAB, with the following content:
<code>
[CMSSW]
</code>
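For illustration only, a minimal ''[CMSSW]'' section for a Monte Carlo job might look like the following; the parameter-set name, output file, and splitting numbers are placeholders, not values from this page:

<code>
[CMSSW]
# No input dataset: events are generated by the parameter-set
datasetpath=None
pset=my_mc_cfg.py
total_number_of_events=10000
events_per_job=1000
output_file=output.root
</code>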
===== Analyse published results =====
To analyse results that have been published in a local DBS, you may use a CRAB configuration identical to any other, with the addition that you must specify the DBS instance to which the data was published: the datasetpath name of your dataset and the ''dbs_url''. To do this, modify the [CMSSW] section of your CRAB configuration file, e.g.
| < | < | ||
| [CMSSW] | [CMSSW] | ||
| Line 82: | Line 87: | ||
| dbs_url=url_local_dbs | dbs_url=url_local_dbs | ||
| </ | </ | ||
Note: As ''dbs_url'', use different URLs for writing and reading:
Writing: https://
Reading: http://
https://
Note: This type of job cannot be used to process a dataset that is not on the Tier-3: the network connection to the T3 is not fast enough to sustain a useful write speed in the stage-out step, and the jobs will fail at the very end, i.e. when trying to copy the results.
===== Non-local jobs =====