Reports until 20:43, Wednesday 03 July 2024
H1 General
ryan.crouch@LIGO.ORG - posted 20:43, Wednesday 03 July 2024 - last comment - 20:55, Wednesday 03 July 2024(78854)
OPS Wednesday EVE shift update

Currently relocking at TRANSITION_FROM_ETMX

Following a lockloss at LASER_NOISE_SUPPRESSION, XARM started having PDH issues and looked extremely fuzzy. ALS_X PDH kept locking and unlocking. After letting Guardian try for a bit and adjusting ETMX and TMSX without making any gains (the DIFF beatnote was -14), I dropped down to do a MANUAL_IA, thinking the input alignment might still be off. I ended up adjusting IM3 and IM4 again, which reduced the fuzz and the frequency of the PDH unlocks; touching PR2 stabilized it further. After a lot of trying to walk the IMs, PR2, and PR3 (reverted after no improvement), with XARM's signals still unable to converge, I skipped ahead to INPUT_ALIGN and ran into the same issue from earlier today at ACQUIRE_XARM_IR: I had to move IM4 more to center it on POP_A, which got the automation to kick in and do the rest. I finished the IA to be safe, and then I was able to lock ALS (XARM is still fuzzy, but not nearly as much, and the PDH didn't unlock) and finally move on.

I wonder if the DARK offsets need to be updated, based on alog 76982, which describes behavior similar to what I saw during this IA: light on LSC-TR_X during ACQUIRE_XARM_IR when there shouldn't be any.

Images attached to this report
Comments related to this report
ryan.crouch@LIGO.ORG - 20:55, Wednesday 03 July 2024 (78855)

Lost lock at LASER_NOISE_SUPPRESSION again, right after the ISS second loop turns on. XARM looks bad after each lockloss as well.

H1 ISC
camilla.compton@LIGO.ORG - posted 17:32, Wednesday 03 July 2024 - last comment - 19:00, Wednesday 03 July 2024(78845)
ISCTEX Beatnote alignment improved

Jennie, Sheila, Keita, Oli, Daniel, Camilla

After the observed temperature dependence (78703), the ALS X beatnote intermittently dropping (78806, 78745), and this morning's PSL fiber realignment (78839), the beatnote was -30dB.

Jenne, then Sheila, then I went onto ISCTEX (photo) and measured: 

We started with a -60dB peak on the spectrum analyzer, using the amplifier's "-1dB Mon" channel. We plugged the spectrum analyzer straight into the BBPD and saw a very similar signal. 

We then plugged into the amplifier's "+13dB" channel that's being used as the output. The beatnote was clearer, around -40dB (photo). 

Jenne and Sheila improved this by touching ALS-M1 and ALS-M2 (layout D1800270); we needed to walk these two mirrors to make improvements. We also adjusted ALS-M3 as the DC power started to drop on the BBPD. Once we got to -20dB on the spectrum analyzer (in-progress photo here), we plugged the main cable back in and, using the ndscope beatnote, got back to -17dB. Although we are not happy with this, it is much better than the -30dB we started with, and the PLL can lock. Leaving it here for the day. We expect that if we had plugged back into the MON channel or straight into the BBPD we'd have seen -40 to -35dB, but we didn't check.

We had a spare BBPD (EE shop ISC cabinet) but didn't try it. Daniel's view is that bad signals coming and going like this is a sign of an amplifier or other electronics failing. 

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 17:35, Wednesday 03 July 2024 (78849)

Reduced beatnote minimum to -40 so we shouldn't have overnight issues.

Images attached to this comment
daniel.sigg@LIGO.ORG - 19:00, Wednesday 03 July 2024 (78853)

There is a power adjustment stage after the beam that was measured at 62mW of laser power. The nominal power on the broadband PD should be around 2mW; this is what FIBER_A_DC_POWER is indicating as well. If 0.5mW exits the fiber, about 0.2 to 0.25mW should make it to the broadband PD. This would result in a 10-12% jump when the fiber is blocked.

The expected beat note is sqrt(2mW * 0.2mW)=0.6mW, if the mode overlap is maximum. With a ~2K transimpedance gain and a 0.09A/W efficiency, we should see a beat note strength of -6dBm minus the overlap mismatch. The RF preamplifier nominally adds +13dB of gain. However the expected +7dBm would be about 2dB above the 1dB compression point of the RF amplifier.
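For reference, a small Python sketch reproducing the arithmetic in this comment; the sqrt(P1*P2) beatnote convention is taken from the text above, and the 50 ohm load used for the dBm conversion is an assumption.

import math

# Values quoted in the comment above.
P_lo = 2e-3           # W, PSL-derived light on the broadband PD
P_fiber = 0.2e-3      # W, fiber light reaching the broadband PD
responsivity = 0.09   # A/W
transimpedance = 2e3  # ohm (~2K per the text)
R_load = 50.0         # ohm, assumed RF load for the dBm conversion

P_beat = math.sqrt(P_lo * P_fiber)   # ~0.63 mW, the convention used above
I_beat = responsivity * P_beat       # photocurrent, ~57 uA
V_beat = transimpedance * I_beat     # ~0.11 V at the PD output
P_rf_dbm = 10 * math.log10(V_beat**2 / R_load / 1e-3)

print(f"beatnote before the RF amp: {P_rf_dbm:.1f} dBm")       # about -6 dBm
print(f"after the +13dB preamp:     {P_rf_dbm + 13:.1f} dBm")   # about +7 dBm

# Expected DC jump when the fiber is blocked (the 10-12% quoted above):
for p in (0.20e-3, 0.25e-3):
    print(f"{p*1e3:.2f} mW of fiber light -> {p / P_lo:.0%} of the 2 mW DC level")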

H1 CDS
david.barker@LIGO.ORG - posted 17:30, Wednesday 03 July 2024 (78848)
HAM2 Camera FOM markup for cameras which have been removed

I have put labels in front of the MC2, PRM and PR2 camera blue-boxes to show that these cameras have been temporarily removed as part of viewport work.

Images attached to this report
H1 ISC
francisco.llamas@LIGO.ORG - posted 17:27, Wednesday 03 July 2024 (78606)
Measurement of sensing function under DHARD changes

LouisD, SheilaD, FranciscoL

On Thursday, June 13, 2024, we made simulines measurements after changing the DHARD pitch and yaw gains. We see a change in the sensing function from a 50% increase of the DHARD pitch and yaw gains.

Routine calibration measurements were done at 2024-06-13 15:46 UTC (78409). We then increased (in magnitude) H1:ASC-DHARD_Y_GAIN from -40 to -60 and H1:ASC-RPC_DHARD_Y_GAIN from -30 to -45 -- a 50% increase -- and ran a second calibration measurement at 2024-06-13 16:56 UTC. The trend dhard_gain.png shows the values of these channels within the calibration measurement time window. The DHARD gains were reverted around 2024-06-13 17:40 UTC.

Plotting the transfer functions of both measurements in out.png shows a significant change in the sensing function at low frequency (<10 Hz). The uncertainty is below 2% for most of both measurements, as seen in unc.png. These plots were produced using compare_measurements.py (full path: /ligo/home/francisco.llamas/CAL/sensing_function/20240621_plot_sensing_measurements/compare_measurements.py). We had to run git config --global safe.directory='*', where the '*' should be replaced by the name of the report directory that the script was unable to find.
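As an illustration only (this is not the actual compare_measurements.py; the file names and column layout here are assumptions), a comparison like the one in out.png can be put together along these lines:

import numpy as np
import matplotlib.pyplot as plt

def load_tf(path):
    # Assumed export format: frequency, real part, imaginary part, relative uncertainty.
    f, re, im, unc = np.loadtxt(path, unpack=True)
    return f, re + 1j * im, unc

# Hypothetical exports of the two simulines sensing measurements.
f_ref, tf_ref, unc_ref = load_tf("sensing_20240613_1546.txt")   # nominal DHARD gains
f_chg, tf_chg, unc_chg = load_tf("sensing_20240613_1656.txt")   # DHARD gains +50%

fig, (ax_mag, ax_ratio) = plt.subplots(2, 1, sharex=True)
ax_mag.loglog(f_ref, np.abs(tf_ref), label="nominal DHARD")
ax_mag.loglog(f_chg, np.abs(tf_chg), label="DHARD +50%")
ax_mag.set_ylabel("|sensing TF|")
ax_mag.legend()

# The ratio makes the low-frequency (<10 Hz) change easy to see.
ratio = np.interp(f_ref, f_chg, np.abs(tf_chg)) / np.abs(tf_ref)
ax_ratio.semilogx(f_ref, ratio)
ax_ratio.axhline(1.0, color="k", linestyle="--")
ax_ratio.set_xlabel("Frequency [Hz]")
ax_ratio.set_ylabel("changed / nominal")
fig.savefig("sensing_comparison.png")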

Follow up analysis and plots to come.

Images attached to this report
H1 PSL
jason.oberling@LIGO.ORG - posted 17:00, Wednesday 03 July 2024 (78814)
H1 PSL PMC Swap, Detailed Version (WP 11947)

J. Oberling, R. Short, J. Driggers

Summary available at this alog.

Background

Our PMC Reflected Power (PMC Refl) has been slowly increasing for several months, roughly starting in mid-March.  There was an unexplained PMC Refl increase near the beginning of the year, but things were mostly calm until the middle of February; we were able to correct this with a remote alignment tweak on Feb 27, 2024, but this is the final time an alignment tweak made any real improvement.  We had been able to recover PMC Refl to just under 17.0 W by slightly tweaking the operating currents of the amplifier pump diodes back in April (changing beam quality but maintaining output power), but this did not stop the slow increase in PMC Refl.  At the end of May Ryan S. and I went into the enclosure and made some measurements around the PMC.  We found the visibility measured at ~91%, while our PMC throughput was down around ~83%; we also inspected the optics between Amp2 and the PMC and found no issues except some residue on mirror M11.  We drag wiped this mirror until it was clean but this caused no change in PMC behavior.  This indicates a potential issue with the PMC itself, and is very similar to the symptoms LLO saw with the original, glued PMCs back in the O1/O2 era.  Oddly enough, after our incursion PMC Refl jumped by ~1.3W, from ~19.7W to ~21.0W, and we could again find no explanation for this jump.  Since we could find no other cause for the slow increase in PMC Refl (not alignment, not mode matching, not dirty optics, etc.) we decided at the time to watch how things developed, with the understanding we would likely have to swap in the spare soon (mystery loss was already at almost 8%).  The PMC Refl increase flattened out for a couple weeks but started increasing again in the latter half of June, and at a quicker rate than previous, so we decided it was time to swap in the spare; Tuesday morning, right before we started the spare swap, we found PMC Refl had already increased past 23 W (Ryan took that last set of trends during his EVE shift on 7/1).

The Swap

We began by taking a transfer function of PMC SN007 (the old PMC), shown in the first picture.  The UGF is a little high (the new PMCs like to be at ~1kHz), but I recall this being the best we could do at the time (we don't have enough electronic gain control to get the UGF to 1kHz by itself, we have to lower the light level on the locking PD to lower the optical gain; this has to be balanced with the dark voltage on the PD, making sure the locked voltage is above the dark voltage).  All in all the TF looks normal.  We then shut off ISS and FSS, unlocked the PMC, turned off the PMC temperature control loop, and turned off the PMC PZT high-voltage power supply.  Time to go into the enclosure.

Once inside we used the High Power Attenuator assembly to lower the power incident on the PMC to 100mW, the typical power level used for alignment; this was done as a "just in case" for laser safety reasons.  We then unwrapped and inspected the new PMC; it is SN004.  Everything, for the most part, looked really good.  We did find a small bit of something on the main output mirror (2nd picture), but were able to successfully drag wipe it away without leaving any residue that we could see (3rd picture).  We did have to move the insulated cover for the PMC's terminal block from SN007 to SN004 (to keep someone from accidentally touching the PZT high-voltage connector), and had to slightly re-route the PZT wires so they were not on the spot where the damping clamps sit.  That done, we were ready to swap the PMCs.  The new PMC fit nicely into the magnetic base and everything looked nice and even.  We installed the damping clamps with the bolts only finger tight to start, plugged in the 2 wires (voltage supply for the heater and PZT), turned on the PZT high-voltage power supply, and proceeded with recovering the PMC.  Pictures 4 through 6 show, in order, the old PMC SN, the new PMC SN, and the new PMC freshly installed on the PSL table.

The Recovery

To get enough light on the PMC locking PD to actually lock the PMC we have to raise the incident power to 10 W (and this is pushing it), so we did so.  On our first attempt to lock the PMC it would not lock, but we did see a small bit of TEM00 flashing through on the PSL Quad display as the PZT ramped.  We routed our spare Lemo cable bundle into the PSL enclosure so we could access the ramp signal, and fired up an oscilloscope to look at the signal on the locking PD.  Once we had it triggering on the PZT ramp we could see several smaller peaks flash through, but one slightly larger one that could be our TEM00.  I did an extremely small yaw adjustment on mirror M12 (my guess is less than 1/32 of a turn, as it barely moved under my fingers) and the suspected TEM00 peak immediately grew by a large amount.  We figured, why not, and tried to lock the PMC; it locked right up without issue, transmitting ~3.5 W out of the 10 W incident.  Success!  Or so we thought.

We fired up the picomotors on mirrors M11 and M12, and Ryan proceeded with tweaking alignment into the PMC.  However, the best he could get was 7.5 W out of the incident 10 W.  We measured the visibility at ~95%, while our power throughput was only 75%.  Well then.

We began checking everything we could think of: optic cleanliness, beam alignment, the control loop, PD alignment.  Ultimately, we found the beam was misaligned on the locking PD.  Once this was fixed Ryan was able to get PMC Trans up to 8.4 W, but that was it.  So while better, we still had a throughput of only 84%, and still could find no explanation for the discrepancy between visibility and power throughput.  At this point we called Jenne for a consult, to get the whole PSL team together.  She suggested we check to make sure our locked PD voltage was above our dark voltage, and sure enough it wasn't.  I increased the light on the PD until the locked voltage was about double the dark voltage, but this didn't improve the ~10% discrepancy between visibility and power throughput.  At this point we figured we had 3 options:

Ultimately, we decided on option 3.  Assuming we were able to get to full power (spoiler alert, we were able), this would then give us a valuable piece of information: Does the new PMC exhibit the same behavior as the old PMC (the slow increase in PMC Refl)?  If it does, then the problem likely isn't with the PMC (although I still struggle to explain the discrepancy between visibility and power throughput in this scenario).  At this point we had already overrun the maintenance window by an hour, so we began increasing the power incident on the PMC in steps, checking mirrors for bright spots along the way; we did this slowly out of an abundance of caution, as we did not want to damage this PMC (we don't have a spare for our spare).  At each step we measured the visibility (this time properly accounting for the 15 mV of dark voltage on this PD) and power throughput; we only tweaked alignment at the first 10W step and at the final full power step.  Power In and Power Out were both measured with our roving water-cooled power meter; the locking PD unlocked voltage was adjusted to approximately -1 V at each power up (the locking PD outputs a negative voltage).  The steps:

Power In (W)   Power Out, Initial (W)   Power Out, Final (W)   Visibility (%)   Power Throughput (%)
 10.0            3.5                      8.4                   94.8             84
 20.0           17.3                     17.3                   96.0             86.5
 30.0           26.2                     26.2                   95.0             87.3
 40.1           34.7                     34.7                   96.1             86.5
 60.1           52.2                     52.2                   96.1             86.9
 80.2           68.7                     68.7                   94.9             85.7
100.0           84.9                     84.9                   93.7             84.9
128.1          104.4                    105.1                   89.6             82.0
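For clarity, the visibility and power throughput columns follow the bookkeeping described above; here is a minimal sketch, assuming the sign convention stated in the text (the locking PD outputs a negative voltage):

def visibility(v_unlocked, v_locked, v_dark=0.0):
    # Fraction of the incident carrier coupled into the cavity, from locking-PD voltages.
    v_unlocked, v_locked, v_dark = abs(v_unlocked), abs(v_locked), abs(v_dark)
    return (v_unlocked - v_locked) / (v_unlocked - v_dark)

def throughput(p_in, p_out):
    return p_out / p_in

# Final full-power row of the table: 128.1 W in, 105.1 W out.
print(f"throughput: {throughput(128.1, 105.1):.1%}")    # ~82.0%

# End-of-recovery locking-PD voltages quoted below (-0.453 V unlocked, -0.061 V locked),
# ignoring the dark offset:
print(f"visibility: {visibility(-0.453, -0.061):.1%}")   # ~86.5%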

At 20 W incident power I noticed a bright spot forming on the PMC input mirror.  I kept an eye on it as we powered up, but at 80 W it started to make me uncomfortable so I did a quick drag wipe of that mirror to see if the spot went away.  It did not and the PMC behavior was completely unchanged.  Ah well.

To end the PMC recovery we took a TF of the new PMC, as shown in the 7th and final picture.  Initially we had a UGF of ~2.5 kHz, but this was with our locking PD unlocked voltage still set at 1V; at this setting we had ~ -110 mV on the PD while the PMC was locked, so plenty of room to lower the optical gain. In the end we had a locked voltage of -0.061 V and an unlocked voltage of -0.453 V; this is a visibility of 86.5%, so we definitely have some more optimization to do here (PMC transmission was unchanged after lowering the light on the locking PD).  I did not notice this yesterday, but the TF of the new PMC starts getting noisy around 500 Hz, while the old one was pretty smooth all the way to 100 Hz; I also did not notice the difference in y-axis scales until just now as I was uploading these pictures.  More indication that we have further optimization work to do here.  Finally, we measured PMC Trans and PMC Refl with our roving water-cooled power meter and recalibrated the PD readings in the PMC MEDM; these measured at 105.1 W and 23.0 W respectively.

We started by trying to lock the RefCav, and it would not lock.  Dreading the results, I looked at the alignment into the RefCav on our handy alignment iris and yup.  Alignment was off.  We informed Jenne that RefCav alignment was off, indicating the PMC output was at a slightly different alignment than the old one.  This means beam alignment into the IMC (and out of the IMC, as it turns out) and onto ISCT1 was also going to be off, especially as these are both 10s of meters from the PMC (the RefCav is only a few meters from the PMC).  Our best guess as to the cause is the mirrors of the new PMC being at slightly different angles compared to the old; this also explains the small yaw shift I had to do to initially lock the new PMC.  So Jenne and Sheila worked on recovering the IMC and ISCT1, respectively, while Ryan and I started realigning the RefCav.  At this point we were almost 3 hours past maintenance end, so we settled on getting the FSS working under a best effort basis with the understanding we would have to go back in during the next maintenance window to completely tune it up (we have more PMC optimization to do so we already have to go back in on July 9th anyway).  In the end we had a single-pass diffraction efficiency for the FSS AOM of 71.7%, a double-pass diffraction efficiency of 69.0%, and a RefCav TPD of just over 0.7 V; we have 263.0 mW input to the FSS AOM and 130.3 mW incident on the RefCav.  Not great, but it's working.  We forgot, however, to tweak the alignment into the ALS/SQZ fiber pickoff; Sheila was able to temporarily bypass that by lowering locking thresholds for both ALS and SQZ subsystems, and Jenne and I went into the enclosure this morning and tweaked that alignment.

This concludes yesterday's PMC swap saga.  I'm keeping WP 11947 open as we still have optimization to do on the new PMC; we also still have to completely tune the FSS path.  During this week we will keep an eye on PMC Refl to see if this new PMC exhibits the same behavior as the old one.

Images attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 16:37, Wednesday 03 July 2024 (78844)
Wednesday Ops Shift End

TITLE: 07/03 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
The IFO has been unlocked all day due to a HAM2 DAC failure and for PSL and ALS adjustments left over from yesterday's maintenance. 
The beatnote at End X was touched up by Jenne, Camilla, and Sheila. This started in the morning and ended shortly before 4.
We started an Initial Alignment, but got stuck in Find X arm IR.


LOG:                                                                                                                             

Start Time System Name Location Lazer_Haz Task Time End
16:08 SAF LVEA LVEA YES LVEA IS LASER HAZARD 10:08
15:10 FAC Karen Optics Lab N Technical cleaning 15:46
15:33 PSL Jason & Jennie PSL Enclosure YES Touching up alignment of ALS FC2 17:16
17:19 PEM Robert Carlos Milly Along arm Y No Testing a Seismometer in the dirt 20:19
17:21 ISC Richard LVEA HAM6 Yes Checking out some racks near HAM6 17:31
17:24 ALS Jenne, Sheila, Camilla, Ollie X Arm Yes Transitioning to LASER HAZARD to adjust ALS 21:01
18:07 PCAL Karen & Francisco PCAL Lab Yes Technical cleaning & escort. 18:22
21:03 ALS Camilla LVEA Yes Getting power meter 21:22
21:14 VAC Gerardo FTCE N Getting parts and checking Vacuum 21:41
21:23 ALS Camilla, Jenne, Sheila EX Yes Realigning ALS X fiber 23:30
21:29 CDS Jonathan fil CER N Unplugging cat 5 cables to restart cameras 22:32
23:10 PEM Robert LVEA Yes Setting up shaker 23:30

 

H1 General
ryan.crouch@LIGO.ORG - posted 16:03, Wednesday 03 July 2024 - last comment - 18:56, Wednesday 03 July 2024(78843)
OPS Wednesday eve shift start

TITLE: 07/03 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 12mph Gusts, 10mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:

Comments related to this report
ryan.crouch@LIGO.ORG - 18:56, Wednesday 03 July 2024 (78852)

I had a lockloss at LASER_NOISE_SUPPRESSION, and now XARM's PDH keeps locking and unlocking and looks very fuzzy, and the DIFF beatnote is bad, -14.

Images attached to this comment
H1 General
anthony.sanchez@LIGO.ORG - posted 12:48, Wednesday 03 July 2024 (78841)
Wednesday Ops Mid Shift

TITLE: 07/03 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
CURRENT ENVIRONMENT:
    SEI_ENV state: SEISMON_ALERT
    Wind: 6mph Gusts, 4mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:

After Erik replaced the HAM2 DAC, there was a timing error on another chassis, h1sush2b, which required a restart; see Erik's alog.

I then put all the optics in safe mode to restart that chassis and reset the ISI and HPI watchdogs.
There was also a DAC kill I had to reset in order to reset the ISI and HPI watchdogs.
We are now trying to get an initial alignment in, but the IMC is not locking. This may be because the IMC optics are in the HAM that tripped this morning.
Taking ISC_LOCK to IDLE

Trending the IMC sliders back to before the earthquake.
Camilla and I touched up MC1, MC3, PRM, IM2, and IM4 back to the slider values from before the chassis trip and recovery; for IM3 see Camilla's alog.

Jason and Jenne went into the PSL room to adjust ALS FC2. When they came out, Jenne said that an ALS adjustment at End X was next.
Jenne, Sheila, Camilla, and Ollie are at EX working on ALS fiber alignment.

 

H1 CAL
louis.dartez@LIGO.ORG - posted 12:36, Wednesday 03 July 2024 (78840)
GDS FIR filter residuals differences between L1 and H1.
[Vlad, Louis]


We were taking a look at the GDS FIR Filter plots (pg. 21+) in the calibration reports at LHO (H1_calibration_report_20240601T183705Z.pdf) and LLO (L1_calibration_report_20240518T183037Z.pdf) side by side. 

We noticed several features in the GDS FIR filter comparison plots that we don't understand. 

1. res_corr_comparison.png: the LHO res corr comparison (I think this stands for "residual correction comparison") starts to run away at low frequencies (<8 Hz), while it's flat at LLO. 
2. ratio_res_corr_comparison.png: LHO's "Ratio of Res Corr comparison" plot has a low frequency ripple that is not present in LLO's reports.
3. ratio_res_corr_no_cc_pole_comparison.png: same as above for the "Ratio of Res Corr No CC Pole Comparison" plots
4. ratio_tst_corrections_comparison.png: There are resonances present in LHO's "Ratio of TST corrections comparison" plots that 1.) don't appear in LLO's reports and 2.) don't match up with the violin modes at 500 Hz and 1kHz. The same is true for the PUM (ratio_pum_corrections_comparison.png) and UIM (ratio_uim_corrections_comparison.png) stages.

The biggest concern is whether these discrepancies are outside of nominal for the GDS FIR pipeline, which would mean that we are introducing additional errors in the GDS pipeline. Could it be an issue in our model of DARM somewhere along the way? Or a mismatch between CAL-CS and the model?
Images attached to this report
Non-image files attached to this report
H1 PSL (ISC, SQZ)
jason.oberling@LIGO.ORG - posted 11:31, Wednesday 03 July 2024 (78839)
Re-align ALS/SQZ Fiber Pickoff on PSL Table

J. Oberling, J. Driggers

As a result of yesterday's PMC work several beams downstream of the PMC were misaligned; one of these was the beam into the fiber pickoff for ALS and SQZ.  I went in this morning to tweak this alignment so there was sufficient light available for both the ALS PLL loops and SQZ.  In the past this has been a very quick tweak to the steering mirror that directs the pickoff beam into the fiber, but not today.  Adjusting the steering mirror only brought the fiber transmission signal from ~0.02 to ~0.035 (our past max has been around 0.09), and trending the external PD that monitors the light available to the fiber showed we had more light than before the PMC work.  Regardless, I used a power meter to check the amount of power incident on the fiber coupler and found it at 40 mW; when this path was first installed in 2019, and upon recovery after the PSL laser upgrade in 2021, we had 50 mW in this pickoff path, so I adjusted ALS-HWP2 to bring the power back to 50 mW (interesting that the external PD thought we had more light when we had less?).  Ultimately, I had to start adjusting the fiber coupler alignment as well as the steering mirror.  The coupler has 5 degrees of freedom to adjust: horizontal, vertical, longitudinal, pitch, and yaw; these adjustments move the fiber coupler's internal coupling lens (longitudinal adjustment is performed by moving the 3 screws that control pitch and yaw all in the same direction by the same amount).  Horizontal and vertical adjustment did nothing but quickly drive the beam off of the fiber; I accidentally did this when checking the horizontal alignment and had a very hard time finding it again, calling Jenne in for assistance (Thank You!!).  What ultimately worked was adjusting the lens away from the fiber (so towards the steering mirror) by a little bit (roughly 1/8 turn of the allen key; this always resulted in a loss of pickoff signal), tweaking the steering mirror to peak the signal (rarely got back to the starting value), then carefully tweaking each of the three pitch/yaw adjustments on the coupler to peak the signal again (it's at this point the signal would increase past its previous max).  At first the going was very slow, fighting for every 0.001 to 0.002 increase, but increases became larger around the middle of the 0.06 - 0.07 range.  In the end we stopped when the fiber transmission signal was reading ~0.101, which is the highest it's been in over a year.

Why did we have to move the coupling lens?  The immediate answer is, "Because the beam size changed."  But why did the beam size change?  A couple of WAGs: the beam size out of this new PMC is different than the old one's, or the slight alignment change we saw post-PMC has the beam passing through the 2 lenses between the ALS/ISCT1 pickoff and the fiber pickoff (lenses ALS-L1 and ALS-L3; the fiber pickoff is a pickoff from the ALS/ISCT1 path on the IO side of the PSL table) in such a way that the beam size changed slightly (yeah, this one is a pretty big WAG...).  Regardless, from what I saw this morning it's possible that the beam size output from the new PMC is slightly different than the old one's.

LHO VE
david.barker@LIGO.ORG - posted 11:05, Wednesday 03 July 2024 (78837)
Wed CP1 Fill

Wed Jul 03 10:13:04 2024 INFO: Fill completed in 13min 1secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 ISC
camilla.compton@LIGO.ORG - posted 09:24, Wednesday 03 July 2024 - last comment - 17:25, Wednesday 03 July 2024(78831)
IM3 adjusted 450urad in Yaw to center beam on IM4 trans.

Sheila, Tony, Jenne, Camilla. Details in 78828.

Comments related to this report
camilla.compton@LIGO.ORG - 17:25, Wednesday 03 July 2024 (78847)

Ryan, Jenne, Camilla

We tried aligning ASC-POP_A with IM4 and then ASC-X_TR_A with PR2, but couldn't get the X arm locked.

We then reverted IM3, IM4 and PR2 so that the x-arm could lock.

Ryan slowly moved IM3 back to this position and the servos followed along! 

H1 CDS (SUS)
erik.vonreis@LIGO.ORG - posted 08:44, Wednesday 03 July 2024 - last comment - 11:16, Wednesday 03 July 2024(78827)
h1sush2b had timing glitch caused by h1sush2a DAC replacement

At 14:44 UTC h1sush2b suffered a timing error and had to be restarted. 

The timing error happened when work was being done in the same rack in the CER, see https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=78823.

The timing error was a "long cycle" of 50 milliseconds.

 

 

Comments related to this report
david.barker@LIGO.ORG - 11:16, Wednesday 03 July 2024 (78838)

Restart log for this morning's work:

Wed03Jul2024
LOC TIME HOSTNAME     MODEL/REBOOT
05:30:57 h1sush2a     ***REBOOT*** 
05:33:09 h1sush2a     h1iopsush2a 
06:45:01 h1sush2a     ***REBOOT***
06:47:12 h1sush2a     h1iopsush2a
07:18:01 h1sush2a     ***REBOOT***
07:20:13 h1sush2a     h1iopsush2a
07:35:51 h1sush2a     ***REBOOT***
07:38:04 h1sush2a     h1iopsush2a
07:38:17 h1sush2a     h1susmc1  
07:38:30 h1sush2a     h1susmc3 
07:38:43 h1sush2a     h1susprm 
07:38:56 h1sush2a     h1suspr3
07:59:25 h1sush2b     ***REBOOT***
08:01:07 h1sush2b     h1iopsush2b
08:01:20 h1sush2b     h1susim   
08:01:33 h1sush2b     h1sushtts
 

H1 CAL
thomas.shaffer@LIGO.ORG - posted 14:34, Saturday 29 June 2024 - last comment - 18:27, Wednesday 03 July 2024(78746)
Calibration Sweep 2106 UTC

Calibration sweep taken today at 2106 UTC in coordination with LLO and Virgo. This was delayed since we weren't thermalized at 1130 PT.

Simulines start:

PDT: 2024-06-29 14:11:45.566107 PDT
UTC: 2024-06-29 21:11:45.566107 UTC
GPS: 1403730723.566107

End:

PDT: 2024-06-29 14:33:08.154689 PDT
UTC: 2024-06-29 21:33:08.154689 UTC
GPS: 1403732006.154689
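A quick cross-check of those stamps (gwpy is assumed to be available, as it usually is in the cds environment; this is just a convenience check, not part of simulines):

from gwpy.time import from_gps

start, end = 1403730723.566107, 1403732006.154689
print(f"duration:  {(end - start) / 60:.1f} min")   # ~21.4 min, matching the PDT stamps
print(f"start UTC: {from_gps(start)}")              # 2024-06-29 21:11:45 UTC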
 

I ran into the error below when I first started the simulines script, but it seemed to move on. I'm not sure if this pops up frequently and this is just the first time I caught it.

Traceback (most recent call last):
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 427, in generateSignalInjection
    SignalInjection(tempObj, [frequency, Amp])
  File "/ligo/groups/cal/src/simulines/simulines/simuLines.py", line 484, in SignalInjection
    drive.start(ramptime=rampUp) #this is blocking, and starts on a GPS second.
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 122, in start
    self._get_slot()
  File "/var/opt/conda/base/envs/cds/lib/python3.10/site-packages/awg.py", line 106, in _get_slot
    raise AWGError("can't set channel for " + self.chan)
awg.AWGError: can't set channel for H1:SUS-ETMX_L1_CAL_EXC
 

Images attached to this report
Comments related to this report
david.barker@LIGO.ORG - 09:48, Sunday 30 June 2024 (78757)

One item to note is that h1susex has been running a different version of awgtpman since last Tuesday.

erik.vonreis@LIGO.ORG - 10:01, Monday 01 July 2024 (78777)

This almost certainly failed to start the excitation.

I tested a 0-amplitude excitation on the same channel using awggui with no issue.

There may be something wrong with the environment the script is running in.

 

 

louis.dartez@LIGO.ORG - 13:48, Monday 01 July 2024 (78784)
We haven't made any changes to the environment that is used to run simulines. The only thing that seems to have changed is that a different version of awgtpman is running now on h1susex as Dave pointed out. 

Having said that, this failure has been seen before but rarely reappears when re-running simulines. So maybe this is not that big of an issue...unless it happens again.
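If an automatic retry ever seems worthwhile, here is a minimal sketch; it only assumes the excitation object behaves as shown in the traceback above (drive.start(ramptime=...) raising awg.AWGError) and is not part of simuLines.py:

import time
import awg  # CDS arbitrary-waveform bindings, as imported by simuLines.py

def start_with_retry(drive, ramp_up, retries=3, wait=5.0):
    # Try drive.start() a few times, since the "can't set channel" failure has been transient.
    for attempt in range(1, retries + 1):
        try:
            drive.start(ramptime=ramp_up)  # blocking; starts on a GPS second
            return True
        except awg.AWGError as err:
            print(f"attempt {attempt}/{retries} failed: {err}")
            time.sleep(wait)
    return False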
louis.dartez@LIGO.ORG - 15:17, Wednesday 03 July 2024 (78842)
Turns out I was wrong about the environment not changing. According to step 7 of the ops calib measurement instructions, simulines has been getting run in the base cds environment...which the calibration group does not control. That's probably worth changing. In the meantime, I'm unsure if that's the cause of last week's issues.
erik.vonreis@LIGO.ORG - 16:51, Wednesday 03 July 2024 (78846)

The CDS environment was stable between June 22 (last good run) and June 29.

 

There may have been another failure on June 27, which would make two failures and no successes since the upgrade.

 

The attached graph for June 27 shows an excitation at EY, but no associated excitation at EX during the same period.  Compare with the graph from June 22.

Images attached to this comment
erik.vonreis@LIGO.ORG - 18:27, Wednesday 03 July 2024 (78851)

On Jun 27 and Jun 28, H1:SUS-ETMX_L2_CAL_EXCMON was excited during the test.
