Tue Jul 02 10:09:09 2024 INFO: Fill completed in 9min 6secs
Gerardo confirmed a good fill curbside.
WP 11948
Symmetra batteries on the MSR UPS unit were replaced; all batteries installed are now the V66 version. The 16 batteries on bank 1 were replaced today; bank 2 batteries were replaced in 2022. Verified the unit showed all 32 batteries installed across both banks. Dave confirmed emails were being sent out from the unit.
D. Barker, F. Clara, M. Pirello, R. McCarthy
Looked at the wind fences this morning. No new damage. EY has a yoke (from an old repair) that broke a couple months back on the one panel we didn't replace, first attached image. This hasn't gotten any worse. EX still looks okay, second image.
Last week I did a major rewrite of the CDS HW reporting EPICS IOC to catch any future DAC FIFO errors, which are currently not being shown in the models' STATE_WORD. Part of that rewrite was to obtain all the front end data from the running DAQ configuration hash.
Today I released a new version which:
TITLE: 07/02 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 4mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.06 μm/s
QUICK SUMMARY:
As I was arriving, the IFO was just getting back into Nominal_Low_Noise at 14:26 UTC.
Took Observatory mode to Calibration for the in-lock charge measurements.
Everything seems to be currently functioning just fine.
Workstations were updated and rebooted. These were OS package updates. Conda packages were not updated.
TITLE: 07/02 Eve Shift: 2300-0500 UTC (1600-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Oli
SHIFT SUMMARY: High winds this evening have kept H1 down for the past 4 hours; gusts peaked around 45mph. Since they've calmed down a bit, I've started locking H1 and it's just reached ENGAGE_ASC_FOR_FULL_IFO.
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:08 | SAF | LVEA | LVEA | YES | LVEA IS LASER HAZARD | Ongoing |
01:29 | CAL | Francisco, Neil | PCal Lab | n | Getting equipment | 02:01 |
These are the times for the continuation of the project discussed in 78734. Unfortunately, we again lost lock before I finished.
Start of change (GPS) | End of change (GPS) | ITMX bias at start (V) | ITMX bias at end (V) | ITMY bias at start (V) | ITMY bias at end (V) |
---|---|---|---|---|---|
1403912193 | 1403912517 | 0 | 20 | 0 | -20 |
1403914518 | 1403914684 | 20 | -19 | -20 | 19 |
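For reference, a minimal gwpy sketch along these lines could be used to trend the bias readbacks over the two GPS spans in the table; the channel names here are hypothetical placeholders, not the channels actually used for this test.

from gwpy.timeseries import TimeSeriesDict

# GPS spans from the table above
spans = [(1403912193, 1403912517), (1403914518, 1403914684)]
# hypothetical readback channel names, for illustration only
channels = ["H1:SUS-ITMX_BIAS_OUT", "H1:SUS-ITMY_BIAS_OUT"]

for start, end in spans:
    data = TimeSeriesDict.get(channels, start, end)
    for name, ts in data.items():
        print(f"{name}: {ts.value[0]:+.1f} -> {ts.value[-1]:+.1f} over {end - start} s")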
Lockloss @ 00:59 UTC - link to lockloss tool
Ends lock at 4h45m. I suspect the cause was wind; gusts have recently hit up to 45mph and there have been lots of glitches in the past hour or two.
A continuation of the previous measurements, where we saw a drop in green power at 34 Celsius in both SHG 1 and 2 in a cavity setup.
To exclude the effect of the optical cavity, we took off the front mirror of the cavity and took the double-pass measurement under slightly different beam alignment. The single-pass measurement was then done by removing the rear mirror. The single-pass measurement result fits the model, but the double-pass does not (we are not sure why).
We rebuilt SHG 2 and measured the phase-matching condition with a lower pump power of 10 mW (all previous measurements were done with 60 mW), and the same drop in power is seen at 34 Celsius.
Conclusion: the sinc-curve phase-matching measurement is only reliable when done in a single-pass setup. A measurement done in a cavity setup is not definitive for diagnosing the crystal condition.
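For context, the single-pass tuning curve being fit here is the standard sinc^2 phase-matching model; below is a minimal numpy sketch of it, with all crystal parameters as illustrative placeholders (not values measured for these SHGs).

import numpy as np

L_crystal = 10e-3      # crystal length [m], placeholder
dk_dT = 500.0          # d(delta k)/dT [1/(m K)], placeholder
T_pm = 33.0            # phase-matching temperature [C], placeholder

T = np.linspace(28, 40, 601)
delta_k = dk_dT * (T - T_pm)
# np.sinc(x) = sin(pi*x)/(pi*x), so this evaluates sinc^2(delta_k * L / 2)
eta = np.sinc(delta_k * L_crystal / (2 * np.pi)) ** 2
print("peak conversion at %.1f C; first nulls where |delta_k| * L = 2*pi" % T[np.argmax(eta)])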
FAMIS 20704
The NPRO has had a couple of sudden power jumps in the past week, seen also in the output of AMP1 and less so in AMP2. The AMP LD powers didn't move at the same time as these jumps.
PMC transmitted power continues to fall while reflected power increases; when we swap the PMC tomorrow morning this should finally be put to rest.
Jason's brief incursion last Tuesday shows up clearly in environment trends.
TITLE: 07/01 Eve Shift: 2300-0500 UTC (1600-2200 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: SEISMON_ALERT
Wind: 16mph Gusts, 11mph 5min avg
Primary useism: 0.07 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY: H1 has been locked and observing for just over 3 hours. ITM bias change tests while observing have resumed for the afternoon.
TITLE: 07/01 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 158Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY:
IFO is in NLN and OBSERVING as of 20:17 UTC (3 hr xx min lock)
Very calm rest-of-shift after a speedy <1 hr 30 min lock acquisition.
Other:
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
16:08 | SAF | LVEA | LVEA | YES | LVEA IS LASER HAZARD | 10:08 |
14:39 | FAC | Karen | Optics, Vac Prep | N | Technical Cleaning | 15:06 |
15:51 | PEM | Robert | LVEA | YES | Viewport Compensation Plate Experiment | 18:48 |
16:04 | FAC | Karen | MY | N | Technical Cleaning | 17:17 |
16:04 | FAC | Kim | MX | N | Technical Cleaning | 16:44 |
16:59 | FAC | Eric | EX Chiller | N | Glycol static and residual pressure check | 17:41 |
17:02 | FAC | Tyler | MX, MY, EX, EY | N | Glycol Check | 17:41 |
17:31 | FAC | Chris | EY | N | HVAC Work | 18:31 |
20:00 | FAC | Karen | Optics Lab | Local | Technical Cleaning | 20:00 |
20:01 | PCAL | Francisco | Optics Lab | Local | Testing relays | 21:52 |
21:47 | SQZ | Terry, Kar Meng, Camilla | Optics Lab | Local | SHG Work | 22:16 |
FAMIS 21042
pH of PSL chiller water was measured to be just above 10.0 according to the color of the test strip.
Naoki, Camilla
We continued the ZM4 PSAMS scan with hot OM2 from 78636. We changed the ZM4 PSAMS strain voltage from 7.5V to 5.5V and saw the squeezing and range improvement shown in the first attachment. The second attachment shows the squeezing improvement; the squeezing level is 5.2dB around 2 kHz.
We also tried 4.5V for the ZM4 PSAMS, but it was worse than 5.5V, so we set the ZM4 PSAMS at 5.5V. The ZM5 PSAMS strain voltage is -0.78V.
Every time we changed the ZM4 PSAMS, we compensated the ZM4 pitch alignment with the slider and ran SCAN_ALIGNMENT_FDS and SCAN_SQZANG.
Calibration sweep taken today at 21:06 UTC in coordination with LLO and Virgo. This was delayed today since we weren't thermalized at 11:30 PT.
Simulines start:
PDT: 2024-06-29 14:11:45.566107 PDT
UTC: 2024-06-29 21:11:45.566107 UTC
GPS: 1403730723.566107
End:
PDT: 2024-06-29 14:33:08.154689 PDT
UTC: 2024-06-29 21:33:08.154689 UTC
GPS: 1403732006.154689
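As a quick sanity check on the stamps above (assuming gwpy is available in the environment), the GPS and UTC times are consistent:

from gwpy.time import to_gps, from_gps

print(to_gps("2024-06-29 21:11:45.566107"))   # 1403730723.566107
print(from_gps(1403732006.154689))            # 2024-06-29 21:33:08.154689 UTC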
I ran into the error below when I first started the simulines script, but it seemed to move on. I'm not sure if this pops up frequently and this is just the first time I've caught it.
Traceback (most recent call last):
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 427, in generateSignalInjection
    SignalInjection(tempObj, [frequency, Amp])
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 484, in SignalInjection
    drive.start(ramptime=rampUp) #this is blocking, and starts on a GPS second.
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 122, in start
    self._get_slot()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 106, in _get_slot
    raise AWGError("can't set channel for " + self.chan)
awg.AWGError: can't set channel for H1:SUS-ETMX_L1_CAL_EXC
One item to note is that h1susex has been running a different version of awgtpman since last Tuesday.
This almost certainly failed to start the excitation.
I tested a 0-amplitude excitation on the same channel using awggui with no issue.
There may be something wrong with the environment the script is running in.
We haven't made any changes to the environment that is used to run simulines. The only thing that seems to have changed is that a different version of awgtpman is running now on h1susex as Dave pointed out. Having said that, this failure has been seen before but rarely reappears when re-running simulines. So maybe this is not that big of an issue...unless it happens again.
Turns out I was wrong about the environment not changing. According to step 7 of the ops calib measurement instructions, simulines has been getting run in the base cds environment, which the calibration group does not control. That's probably worth changing. In the meantime, I'm unsure if that's the cause of last week's issues.
The CDS environment was stable between June 22 (last good run) and Jun 29.
There may have been another failure on June 27, which would make two failures and no successes since the upgrade.
The attached graph for June 27 shows an excitation at EY, but no associated excitation at EX during the same period. Compare with the graph from June 22.
On Jun 27 and Jun 28, H1:SUS-ETMX_L2_CAL_EXCMON was excited during the test.
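The June 27 comparison can be reproduced with something like the gwpy sketch below, which fetches the L2 EXCMON channels over the measurement window and checks whether anything was actually driven at EX; the window and the EY channel name are assumptions for illustration, not values taken from this entry.

from gwpy.timeseries import TimeSeriesDict

# approximate sweep window on June 27, for illustration only
start, end = "2024-06-27 21:10:00 UTC", "2024-06-27 21:35:00 UTC"
chans = ["H1:SUS-ETMX_L2_CAL_EXCMON", "H1:SUS-ETMY_L2_CAL_EXCMON"]  # EY name assumed

data = TimeSeriesDict.get(chans, start, end)
for name, ts in data.items():
    # a flat, near-zero RMS means no excitation ran on that channel
    print(name, "max 1-s rms:", ts.rms().max())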
A few weeks back Oli put in an alog about IMC and SRM M3 saturations during earthquake lock losses. In April of 2023 Gabriele had redesigned the M1-M3 offload, reducing the gain of the offload somewhat to get rid of 3-ish hz instabilities in the corner cavities. This may have contributed to the SRM saturations that Oli found, so we want to try adding some low frequency gain back into the SRM offloading.
Sheila wrote out the math for me for the stability of both the SRCL loop and the M1-M3 crossover, so I have been looking at ways to increase the low-frequency gain without affecting the stability above 1 Hz. The open loop gain for SRCL looks like:
SRCL_OLG = SRCL_sens * SRCL_Filter * (SRM_M3_PLANT + SRM_M1_LOCK_FILTER * SRM_M1_PLANT)
The SRM M1-M3 offloading looks like:
SRM_OFFLOAD_OLG = SRM_M1_LOCK_FILTER * SRM_M1_PLANT * SRCL_sens * SRCL_Filter / (1 - SRM_M3_PLANT * SRCL_sens * SRCL_Filter)
Both of these are (or behave like) open loop gains (g), so the suppression/gain peaking can be shown by looking at 1/(1-g) for each.
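Written compactly (same quantities as above, abbreviating the SRCL sensing as S, the SRCL filter as F, the SRM M1/M3 plants as P1 and P3, and the M1 lock/offload filter as K1), the relations being used are:

\begin{align}
  G_{\mathrm{SRCL}}    &= S\,F\,\bigl(P_3 + K_1 P_1\bigr) \\
  G_{\mathrm{offload}} &= \frac{K_1 P_1\, S\, F}{1 - P_3\, S\, F} \\
  \text{suppression / gain peaking} &= \frac{1}{1 - G}
\end{align}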
I made a 50 mHz boost filter for this and tried it during the commissioning window this morning. Bode plots for the boost (red), boost * M1 lock filters (blue), and the nominal M1 lock filter (green) are shown in the first image. The effect on the M3 drives and SRCL is shown in the ASDs in the second image; live traces are with the new boost, refs are without. There is good reduction below 100 mHz, but there is gain peaking from the M1-M3 offloading at 0.2-0.4 Hz, which might bleed into the secondary microseism during the winter. I'm working on a filter with similar gain but less gain peaking, placed in a region that won't affect the overall RMS of the M3 drive as much. I will try installing and testing it during maintenance tomorrow.
These are some of the design plots I have been using. The first image is the M1-M3 crossover: red is the March 2023 filter that may have been causing the 3-ish Hz instabilities, solid blue is the filter that Gabriele installed at that time, dashed purple is the boost I tried this morning, and dotted yellow is a modified boost that I want to try tomorrow. The second plot is the suppression for each filter. The 0.2-0.3 Hz gain peaking I saw during the test this morning is easy to see in the dashed purple trace of the second plot; I think the dotted yellow filter will have less gain peaking and moves it closer to 0.7-1 Hz, where it won't affect the RMS of the M3 drive as much.
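For a sense of what such a boost looks like, here is a minimal scipy sketch of a single zero/pole low-frequency boost (unity gain above the corner, extra gain below). The corner frequencies are placeholders, not the filter that was actually loaded into FM5.

import numpy as np
from scipy import signal

f_zero, f_pole = 0.15, 0.015   # Hz, placeholders: ~20 dB of extra gain below ~0.1 Hz
boost = signal.ZerosPolesGain([-2 * np.pi * f_zero], [-2 * np.pi * f_pole], 1.0)

f = np.logspace(-3, 1, 500)                    # 1 mHz to 10 Hz
w, mag, phase = signal.bode(boost, w=2 * np.pi * f)
print(f"gain at 1 mHz: {mag[0]:.1f} dB, at 10 Hz: {mag[-1]:.1f} dB")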
The new boost is installed on SRM and ready to try when we get a chance. The attached image shows the Bode plots for the boost filter (red), boost * nominal M1 filters (blue), and the nominal M1 filters (green). I think we might try to test these on Thursday.
I tested this new boost yesterday and it works well, so I'm adding the engagement of FM5 to the ISC_DRMI guardian. Will post a log with the results in a bit.