H1 General (CAL, VE)
oli.patane@LIGO.ORG - posted 14:07, Tuesday 03 October 2023 - last comment - 08:28, Wednesday 04 October 2023(73244)
Post Tues Maintenance LVEA Sweep

Ryan S and I did a sweep of the LVEA. Everything looked good, but there were a few things we thought were probably okay but still wanted to log:

attachment1 - Two power supply units were on at the y-arm termination slab.

Not pictured - Small puddle of condensation water on the tile close to y-arm termination slab from the pipes above. This water is separate from the puddles that collect from either side of the beam tube. - tagging VE

attachment2, attachment5 - OM2 power supply and multimeter connected to Beckhoff Output Interface 1 via mini hook clips

attachment3 - Computer for PCal X plugged in. I remember talking to TJ and Tony about this the last time I swept, and the conclusion (I believe) was that it had been plugged in for someone who needed it at the time (not sure if it's still needed?) - tagging CAL

attachment4 - Power supply underneath Hartmann table on

 

Images attached to this report
Comments related to this report
richard.savage@LIGO.ORG - 16:34, Tuesday 03 October 2023 (73255)CAL

Olli, Ryan, thanks for noticing this computer by the Xend Pcal system.  While it happens to be located nearby, and we may have used it years ago, we (Pcal team) have not used that computer for some time and don't plan to use it in the future.  As far as I know it is currently in an inoperable state, having been replaced by the CDS laptops that we carry down to the end stations from the corner station.

camilla.compton@LIGO.ORG - 08:28, Wednesday 04 October 2023 (73263)

Thanks Oli. The power supply under the HWS table remains on as it powers the HWS cameras. It was moved from in-rack power in 2018: 44948 44855.

LHO VE (VE)
gerardo.moreno@LIGO.ORG - posted 13:54, Tuesday 03 October 2023 (73246)
FAMIS Task for Corner Station Turbo Pumps

(Jordan V., Gerardo M.)

Output mode cleaner tube turbo station:
     Scroll pump hours: 5561.4
     Turbo pump hours: 5621
     Crash bearing life is at 100%

X beam manifold turbo station:
     Scroll pump hours: 785
     Turbo pump hours: 783.5
     Crash bearing life is at 100%

Y beam manifold turbo station:
     Scroll pump hours: 1878.6
     Turbo pump hours: 603
     Crash bearing life is at 100%

FAMIS tasks 23523, 23595 and 23643.

H1 CDS
david.barker@LIGO.ORG - posted 13:25, Tuesday 03 October 2023 - last comment - 14:19, Tuesday 03 October 2023(73245)
CDS Maintenance Summary: Tuesday 3rd October 2023

WP11437 DAQ file server network jumbo frames

Jonathan:

During the DAQ restart Jonathan increased the MTU on the ethernet ports connecting FW and NDS to their NFS file servers to 9000 bytes. This change required an extended FW downtime, and a second restart of the NDS.

WP11447 Add missing model slow channels to DAQ

Dave:

Both H1EPICS_FEC.ini and H1EPICS_SDF.ini were extended to add missing channels. DAQ+EDC restart was needed.

WP11449 Add missing Dust Monitor channels to DAQ

Dave:

H1EPICS_DUST.ini  was extended to add missing channels. This file is now being generated by a python script. DAQ+EDC restart was needed.
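For illustration, the generator implied here is of roughly this shape (a hypothetical sketch only; the channel list, field names, and values below are placeholders, not the real H1EPICS_DUST.ini schema):

```python
#!/usr/bin/env python3
# Hypothetical sketch of generating slow-channel DAQ ini entries for dust monitors.

LOCATIONS = ["LAB1", "LAB2", "LVEA5", "LVEA10"]   # placeholder monitor locations
BIN_SIZES = ["300NM", "500NM"]                    # placeholder particle-size bins

def channel_block(name, acquire=3, datarate=16, datatype=4):
    # Field names/values here are illustrative, not the actual schema.
    return f"[{name}]\nacquire={acquire}\ndatarate={datarate}\ndatatype={datatype}\n\n"

with open("H1EPICS_DUST.ini", "w") as f:
    for loc in LOCATIONS:
        for size in BIN_SIZES:
            f.write(channel_block(f"H1:PEM-CS_DUST_{loc}_{size}_PCF"))
```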

WP11450 New camera server code

Patrick:

New server code was installed on h1digivideo3 for all cameras being served by this machine. This is a mitigation of sending NaN values to h1asc via cds_ca_copy.

WP11456 PSL Beckhoff SDF monitor Watchdog Channels

Ryan S, Jason, Dave:

New monitor.req and safe.snap files were installed on h1pslopcsdf.

DAQ Restart

Jonathan, Erik, Dave:

DAQ was restarted for the above changes. As noted, FWs were down for an extended time and NDSs needed a second restart to implement jumbo frames.

GDS1 needed a second restart to sync its channel list.

FW1 spontaneously restarted itself 20 minutes later.

 

Comments related to this report
david.barker@LIGO.ORG - 13:29, Tuesday 03 October 2023 (73247)

Tue03Oct2023
LOC TIME HOSTNAME     MODEL/REBOOT
11:30:51 h1daqdc0     [DAQ]  <<< 0-leg restart
11:31:05 h1daqfw0     [DAQ]
11:31:06 h1daqnds0    [DAQ]
11:31:06 h1daqtw0     [DAQ]
11:31:14 h1daqgds0    [DAQ]
11:36:24 h1daqfw0     [DAQ] <<< FW0 2nd restart
11:39:49 h1daqnds0    [DAQ] <<< NDS 2nd restart


11:42:08 h1susauxb123 h1edc[DAQ] <<< EDC restart


11:44:40 h1daqdc1     [DAQ] <<< 1-leg restart
11:44:48 h1daqfw1     [DAQ]
11:44:49 h1daqtw1     [DAQ]
11:44:50 h1daqnds1    [DAQ]
11:44:58 h1daqgds1    [DAQ]
11:47:04 h1daqfw1     [DAQ] <<< FW1 2nd restart
11:47:50 h1daqgds1    [DAQ] <<< GDS1 2nd restart (chan list)
11:49:05 h1daqnds1    [DAQ] <<< NDS1 2nd restart


12:17:28 h1daqfw1     [DAQ] <<< FW1 spontaneous restart
 

david.barker@LIGO.ORG - 14:19, Tuesday 03 October 2023 (73248)

DAQ Frame File Channel Changes

Fast channels: no additions or removals

Slow channels:

no channels removed

3,319 channels added.

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 13:15, Tuesday 03 October 2023 (73243)
OPS Day Midshift Update

Tuesday Maintenance activities have concluded

IFO is starting INITIAL ALIGNMENT, then acquiring lock (NOMINAL LOW NOISE) and moving into OBSERVING thereafter.

H1 SQZ
camilla.compton@LIGO.ORG - posted 13:00, Tuesday 03 October 2023 - last comment - 15:18, Tuesday 03 October 2023(73241)
Spare S2000862 Homodyne Installed on SQZT7

Vicky and Sheila showed in 72604 and 72973 that the quantum efficiency of PD-B on our homodyne was low.

Today Sheila and I swapped the original S1800623 homodyne with the spare S2000862 and balanced it by adjusting the BS and PDB steering mirror to bring H1:SQZ-HD_DIFF_DC_OUTPUT to 0. Both PDs now have the same QE of >97%.

We removed the fudge factors of -1.09 and 1.11 in H1:SQZ-HD_{A,B}_DC, so these channels now read in mA. The dark noise and shot noise look similar to the old homodyne's, see attached. Next week we will align and match the SQZ beam to the LO. We left the original HD in SQZT7.

Channel   HD_{A,B}_DC   Ophir PM (s/n 756764, meter 751044)   QE w/Ophir (from 72604)   Thorlabs PM   QE w/Thorlabs (from 72604)
PD-A      0.486 mA      0.58 mW                               97.7%                     0.560 mW      101.2%
PD-B      0.486 mA      0.58 mW                               97.7%                     0.563 mW      100.7%

* Example QE calculation: for the Ophir measurement, responsivity = 0.486 mA / 0.58 mW ≈ 0.838 A/W; QE is 97.7% from 0.838 A/W / 0.8582 A/W, where 0.8582 A/W is the responsivity at 100% QE (from 72604).
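The same arithmetic spelled out, using only the numbers quoted in this entry and in 72604 (a check of the quoted values, not a new measurement):

```python
# Re-derivation of the PD-B QE quoted above (Ophir measurement).
photocurrent_A = 0.486e-3      # H1:SQZ-HD_B_DC reading, 0.486 mA
ophir_power_W = 0.58e-3        # Ophir power meter reading, 0.58 mW
resp_100pct = 0.8582           # A/W responsivity at 100% QE, from alog 72604

responsivity = photocurrent_A / ophir_power_W   # ~0.838 A/W
qe = responsivity / resp_100pct                 # ~0.976, i.e. ~97.6-97.7% to rounding
print(f"responsivity = {responsivity:.3f} A/W, QE = {100 * qe:.1f} %")
```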

Images attached to this report
Comments related to this report
naoki.aritomi@LIGO.ORG - 15:18, Tuesday 03 October 2023 (73250)

I accepted the SDF diffs for the homodyne fudge factors so we can go to observe.

Images attached to this comment
H1 CDS (SEI)
filiberto.clara@LIGO.ORG - posted 12:34, Tuesday 03 October 2023 (73240)
Troubleshooting ITMX ISI Coil Driver

WP 11448

The plan for today was to replace the third ISI coil driver on ITMX. The spare unit powered on with fault lights on, so it was pulled and needs to be checked to verify that all ECRs on the unit have been completed. The original ISI coil driver was reinstalled.

H1 DAQ
jonathan.hanks@LIGO.ORG - posted 11:58, Tuesday 03 October 2023 (73237)
WP 11437 update MTU on the h1daqframes machines
While looking at slowdowns in frame writing, we noticed that the frame file server NICs had the default MTU of 1500. Today, during the DAQ restart, we updated the MTU to 9000 to allow larger packets between the framewriters, the frame file server, and the NDS machines.

Basic process:

- on daqd
 * stop the daqd
 * umount /frames
 * take the interface down on the daqd

- on daqframes
 * take the interface down
 * update the mtu
 * bring the interface up

- back on the daqd
 * bring the interface up
 * verify that both sides are at an MTU of 9000
 * mount /frames
 * start daqd

This was done on h1daqfw0, h1daqfw1, h1daqnds0, h1daqnds1, h1daqframes-0, h1daqframes-1.
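The per-host sequence above could be scripted roughly as follows (a hypothetical sketch only, assuming the standard iproute2 commands and an /etc/fstab entry for /frames; the interface and service names are placeholders, not the actual configuration):

```python
#!/usr/bin/env python3
# Hypothetical sketch of the daqd-side MTU bump; not the procedure actually run.
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def set_mtu(iface, mtu=9000):
    # Take the NIC down, change its MTU, bring it back up.
    run("ip", "link", "set", "dev", iface, "down")
    run("ip", "link", "set", "dev", iface, "mtu", str(mtu))
    run("ip", "link", "set", "dev", iface, "up")

def bump_daqd_host(iface="eth1"):                 # placeholder interface name
    run("systemctl", "stop", "daqd")              # placeholder service name
    run("umount", "/frames")
    set_mtu(iface)
    run("ip", "link", "show", iface)              # verify MTU now reads 9000
    run("mount", "/frames")
    run("systemctl", "start", "daqd")

if __name__ == "__main__":
    bump_daqd_host()
```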
H1 CDS (PSL, TCS)
filiberto.clara@LIGO.ORG - posted 10:42, Tuesday 03 October 2023 - last comment - 17:28, Tuesday 03 October 2023(73233)
Grounding of EtherCAT Electronics in CER

WP 11455
WP 11457

The remaining two slow controls chassis were grounded via a wire braid to a grounding terminal block, the same as was done in alog 68096, alog 66402, and alog 66469. This scheme provides a low-resistance path to the grounding block; the anodized racks prevent a solid grounding connection via the mounting screws. The PSL and TCS slow controls were grounded using the new scheme. See attached pictures. This completes all slow controls chassis in the CER and End Stations.

Images attached to this report
Comments related to this report
jeffrey.kissel@LIGO.ORG - 11:06, Tuesday 03 October 2023 (73234)DetChar
Tagging DetChar -- with the CW group in mind. 
After this maintenance day, there might be a change in comb behavior as a result of this work.

This is action taken as a result of Keita, Daniel, and Ansel's work on ID'ing combs from the OM2 heater system -- see LHO:72967.
keita.kawabe@LIGO.ORG - 17:28, Tuesday 03 October 2023 (73257)AWC, CDS, DetChar-Request, ISC

After Fil was done with the grounding work, I temporarily restored the connection between the Beckhoff cable and the heater chassis and used a normal breakout board to measure the voltage between the driver ground (pin 13) and the positive drive voltage (pin 6) of D2000212, just like I did on Aug 09 2023 (alog 72061).

1st attachment is today, 2nd attachment is on Aug 09. I see no improvement (OK, it's better by ~1dB today).

After seeing this, I swapped the breakout board back to the switchable one I've been using to connect only a subset of pins (e.g. only thermistor 1). This time there is no electrical connection on any pin, but the cable is physically attached to the breakout board. There is no connection between the cable shell and the chassis connector shell either. I expect that the comb will be gone, but I'd like DetChar to have a look.

The heater driver is driven by the voltage reference on the nearby table, not Beckhoff.

Images attached to this comment
H1 TCS
ryan.crouch@LIGO.ORG - posted 10:34, Tuesday 03 October 2023 (73231)
TCS Bi-Weekly chiller topoff FAMIS 26158

Closes FAMIS 26158

There was no water in the cup in the corner.

                 TCS X    TCS Y
Previous Level   29.6     9.5
New Level        30.0     10.0
Water added      150 mL   150 mL
H1 CDS
patrick.thomas@LIGO.ORG - posted 10:26, Tuesday 03 October 2023 (73228)
Updated pylon-camera-server on h1digivideo3 from 0.1.11 to 0.1.13.
Closes WP 11450.

Ran 'apt-get install pylon-camera-server' as root on h1digivideo3. The code for each camera was restarted by the service manager. No issues were seen. This update is intended to fix a possible threading issue that may have been a source of 'NaN' input. See this commit for the relevant change.
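As an illustration of the kind of guard such a fix implies (a hypothetical sketch, not the actual pylon-camera-server change), one would drop non-finite centroid samples before they are published:

```python
import math

def sanitize(value, last_good=0.0):
    """Return the sample if it is finite, otherwise hold the last good value.

    Hypothetical guard against publishing NaN centroids to downstream
    consumers (e.g. h1asc via cds_ca_copy); not the real server code.
    """
    if value is None or not math.isfinite(value):
        return last_good
    return value

print(sanitize(123.4))                 # -> 123.4
print(sanitize(float("nan"), 123.4))   # -> 123.4 (NaN replaced by last good value)
```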
H1 AOS
jason.oberling@LIGO.ORG - posted 09:12, Tuesday 03 October 2023 - last comment - 08:36, Thursday 05 October 2023(73226)
ITMx Optical Lever Laser Replaced (WP 11454)

J. Oberling, F. Mera

This morning we swapped the failing laser in the ITMx OpLev with a spare.  The first attached picture shows the OpLev signals before the laser swap, the 2nd is after.  As can be seen there was no change in alignment, but the SUM counts are now back around 7000.  I'll keep an eye on this new laser over the next couple of days.

This completes WP 11454.

Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 13:09, Tuesday 03 October 2023 (73242)

J. Oberling, R. Short

Checking on the laser after a few hours of warm-up, I found the cooler to be very warm, and the box housing the DC-DC converter that powers the laser (steps ~11 VDC down to 5 VDC) was extremely warm. Also, the SUM counts had dropped from the ~7k we started at to ~1.1k. Seeing as we had just installed a new laser, my suspicion was that the DC-DC converter was failing. Checking the OpLev power supply in the CER, it was providing 3 A to the LVEA OpLev lasers; this should be just over 1 A, which was further indication something was up. Ryan and I replaced the DC-DC converter with a spare. Upon powering up with the new converter the current delivered by the power supply was still ~3 A, so we swapped the laser with another spare. With the new laser the delivered current was down to just over 1 A, as it should be. The laser power was set so the SUM counts are still at ~7k, and we will keep an eye on this OpLev over the coming hours/days. Both lasers, SN 191-1 and SN 119-2, will be tested in the lab; my suspicion is that the dying DC-DC converter damaged both lasers and they will have to be repaired by the vendor, we'll see what the lab testing says. The new laser SN is 199-1.

austin.jennings@LIGO.ORG - 23:48, Tuesday 03 October 2023 (73260)

As the night progresses, I'm noticing the sum counts slowly going up, from ~6200 earlier to ~7100 now. Odd.

Images attached to this comment
ryan.crouch@LIGO.ORG - 08:24, Wednesday 04 October 2023 (73262)

ITMX OPLEV sum counts are at about 7500 this morning.

Images attached to this comment
ryan.crouch@LIGO.ORG - 08:36, Thursday 05 October 2023 (73282)

Sum counts are around 7700 this morning; they're still creeping up.

Images attached to this comment
H1 CDS
david.barker@LIGO.ORG - posted 08:32, Tuesday 03 October 2023 - last comment - 10:37, Tuesday 03 October 2023(73225)
Weather stations restart

The outbuilding weather station IOCs stopped updating again at 04:02 this morning (they had stopped at 04:27 Monday morning).

As part of the investigation I found that I am still running a 04:01 cronjob which restarts any stuck weather stations. This should only restart an IOC if its weather station PVs are in an invalid state. The cronjob was installed on 17th Jan 2023 following a spate of lock-ups which were occurring between 03:18 and 03:28.

I have disabled this crontab for now.

Comments related to this report
david.barker@LIGO.ORG - 10:37, Tuesday 03 October 2023 (73232)

Instead of disabling the crontab, we have decided to reschedule it to run at 05:01 each morning:

01 05 * * * /ligo/home/controls/restart_weather > /dev/null
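For context, the validity check such a script presumably performs might look roughly like this (a hypothetical sketch, not the actual /ligo/home/controls/restart_weather script; the channel names and restart command are placeholders):

```python
#!/usr/bin/env python3
# Hypothetical sketch: restart a weather IOC only if its PVs cannot be read.
import subprocess
from epics import caget   # pyepics

WEATHER_IOCS = {
    "h0weathermx": "H0:PEM-MX_WIND_ROOF_WEATHER_MPH",   # placeholder PV names
    "h0weathermy": "H0:PEM-MY_WIND_ROOF_WEATHER_MPH",
}

def pv_is_invalid(pvname, timeout=5.0):
    """Treat the PV as invalid if it cannot be read within the timeout."""
    return caget(pvname, timeout=timeout) is None

for host, pv in WEATHER_IOCS.items():
    if pv_is_invalid(pv):
        print(f"{host}: PV {pv} is invalid, restarting IOC")
        subprocess.run(["ssh", host, "restart_weather_ioc"])  # placeholder command
```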
 

LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 08:07, Tuesday 03 October 2023 - last comment - 12:08, Tuesday 03 October 2023(73224)
OPS Day Shift Start

TITLE: 10/03 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventative Maintenance
OUTGOING OPERATOR: Ryan S
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 4mph Gusts, 4mph 5min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.25 μm/s
QUICK SUMMARY:

IFO is DOWN for Planned Tuesday Maintenance

When I arrived, IFO was about halfway through locking.

It was locked for only 3 minutes at NLN and unlocked due to a DCPD saturation at 14:41 UTC.

Dust monitors were sounding and were acknowledged.

 

Comments related to this report
ryan.crouch@LIGO.ORG - 12:08, Tuesday 03 October 2023 (73238)

For the 14:41 lockloss, the signals look fine on the lockloss select tool scopes. The violins also look fine; they were all trending downwards. The OMC_DCPD signals looked interesting: they seem to have been diverging before the lockloss.

Images attached to this comment
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 22:42, Friday 22 September 2023 - last comment - 12:22, Tuesday 03 October 2023(73064)
Lockloss

Lockloss @ 09/23 05:39UTC

Comments related to this report
oli.patane@LIGO.ORG - 01:03, Saturday 23 September 2023 (73066)

Right after lockloss DIAG_MAIN was showing:

- PSL_ISS: Diffracted power is low

- OPLEV_SUMS: ETMX sums low

 

05:41 UTC LOCKING_ARMS_GREEN: the detector couldn't see ALSY at all, and I noticed the ETM/ITM L2 saturations (attachment1 - L2 ITMX MASTER_OUT as an example) on the saturation monitor (attachment2), as well as the ETMY oplev moving around wildly (attachment3).

05:54 I took the detector to DOWN and could immediately see ALSY on the cameras and ndscope; the L2 saturations were all still there.

05:56 and 06:05 I went to LOCKING_ARMS_GREEN and GREEN_ARMS_MANUAL, respectively, but wasn't able to lock anything. I could reach both states without the saturations getting too bad, but ALSX and Y each eventually locked for only a few seconds, then unlocked and went to basically 0 on the cameras and ndscope (attachment4). Sometime after this, ALSX and Y went to FAULT with the messages "PDH" and "ReflPD A".

06:07 I tried going to INITIAL_ALIGNMENT, but Verbal kept calling out all of the optics as saturating, ALIGN_IFO wouldn't move past SET_SUS_FOR_ALS_FPMI, and the oplev overview screen showed ETMY, ITMX, and ETMX moving all over the place (I was worried something would go really wrong if I left it in that configuration).

I tried waiting a bit and then tried INITIAL_ALIGNMENT and LOCKING_ARMS_GREEN again but had the same results.

Naoki came on and thought the saturations might be due to the camera servos, so he cleared the camera servo history and all of the saturations disappeared (attachment5) and an INITIAL_ALIGNMENT is now running fine.

Not sure what caused the saturations in the camera servos, but it'll be worth looking into whether the saturations are what ended up causing this lockloss.

Images attached to this comment
naoki.aritomi@LIGO.ORG - 13:50, Monday 25 September 2023 (73091)

As shown in the attached figure, the output of the camera servo for PIT suddenly became 1e20 and the lockloss happened 0.3s later, so this lockloss is likely due to the camera servo. I checked the input signal of the camera servo. The bottom left is the input of PIT1 and should be the same as the BS camera output (bottom right). The BS camera output itself looks OK, but the input of PIT1 became red dashed and the output became 1e20. The situation is the same for PIT2 and PIT3. I am not sure why this happened.

Images attached to this comment
david.barker@LIGO.ORG - 13:41, Tuesday 26 September 2023 (73119)

Opened FRS29216. It looks like a transient NaN on the ETMY_Y camera centroid channel caused the outputs of the ASC CAM_PIT filter modules to latch at 1e20.
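For anyone wondering why a transient NaN "latches": once a NaN enters an IIR filter's history, it contaminates every subsequent output until the history is cleared, which is consistent with the saturations only going away after the camera servo history was cleared (below is an illustrative sketch, not the actual front-end filter code, which additionally rails its outputs at +/-1e20):

```python
import math

# One step of a direct-form II transposed biquad; s = [s1, s2] is the filter history.
def biquad(x, s, b, a):
    y = b[0] * x + s[0]
    s[0] = b[1] * x - a[1] * y + s[1]
    s[1] = b[2] * x - a[2] * y
    return y

b, a = (0.2, 0.4, 0.2), (1.0, -0.3, 0.1)   # arbitrary stable coefficients
state = [0.0, 0.0]

for x in [1.0, 1.0, float("nan"), 1.0, 1.0]:
    print(x, "->", biquad(x, state, b, a))
# Every output from the NaN sample onwards is NaN: the bad value sits in the
# filter history and keeps reappearing until the history is explicitly cleared.
```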

oli.patane@LIGO.ORG - 11:40, Wednesday 27 September 2023 (73135)

Camilla, Oli

We trended the ASC-CAM_PIT{1,2,3}_INMON/OUTPUT channels alongside ASC-AS_A_DC_NSUM_OUT_DQ (attachment1) and found that, according to ASC-AS_A_DC_NSUM_OUT_DQ, the lockloss started (cursor 1) ~0.37s before PIT{1,2,3}_INMON read NaN (cursor 2), so the lockloss was not caused by the camera servos.

It is possible that the NaN values are linked to the light dropping off of the PIT2 and PIT3 cameras right after the lockloss: when the cameras come back online ~0.2s later, both the PIT2 and PIT3 cameras are at 0, and looking back over several locklosses, these two cameras tend to drop to 0 between 0.35 and 0.55s after the lockloss starts. However, the PIT1 camera still registers light for another 0.8s after coming back online (the typical time for this camera).

 

Images attached to this comment
naoki.aritomi@LIGO.ORG - 12:22, Tuesday 03 October 2023 (73239)

Patrick updated the camera server to solve the issue in alog73228.

H1 ISC (AWC, DetChar, ISC)
keita.kawabe@LIGO.ORG - posted 12:01, Tuesday 19 September 2023 - last comment - 12:01, Tuesday 03 October 2023(72967)
OM2 thermistors are connected back to Beckhoff (but not the heater driver voltage input, for which we're still using the voltage reference) (Daniel, Keita)

Detchar, please tell us if the 1.66Hz comb is back.

We changed the OM2 heater driver configuration from what was described in alog 72061.

We used a breakout board with jumpers to connect all OM2 thermistor readback pins (pin 9, 10, 11, 12, 22, 23, 24, 25) to Beckhoff at the back of the driver chassis. Nothing else (even the DB25 shell on the chassis) is connected to Beckhoff.

Heater voltage inputs (pin 6 for positive and 19 for negative) are connected to the portable voltage reference powered by a DC power supply to provide 7.15V.

BTW, somebody powered the OM2 heater off at some point in time, i.e. OM2 has been cold for some time but we don't know exactly how long.

When we went to the rack, half of the power supply terminal (which we normally use for 9V batteries) was disconnected (1st picture), and there was no power to the heater. Baffling. FYI, if it's not clear, the power terminal should look as in the second picture.

Somebody must have snagged the cables hard enough to disconnect them, and didn't even bother to check.

Next time you do it, since reconnecting is NOT good enough, read alog 72286 to learn how to set the voltage reference to 7.15V and turn off the auto-turn-off function, then do it, and tell me you did it. I promise I will thank you.

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 12:07, Tuesday 19 September 2023 (72969)AWC, ISC

Thermistors are working.

Images attached to this comment
daniel.sigg@LIGO.ORG - 12:22, Tuesday 19 September 2023 (72970)

There is an OSEM pitch shift of OM2 at the end of the maintenance period 3 weeks ago (Aug 29).

Images attached to this comment
jenne.driggers@LIGO.ORG - 14:35, Tuesday 19 September 2023 (72979)CAL

Having turned the heater back on will likely affect our calibration.  It's not a bad thing, but it is something to be aware of.

ryan.short@LIGO.ORG - 14:47, Tuesday 19 September 2023 (72980)CAL

Indeed it now seems that there is a ~5Mpc difference in the range calculations between the front-ends (SNSW) and GDS (SNSC) compared to our last observation time.

Images attached to this comment
ansel.neunzert@LIGO.ORG - 10:22, Wednesday 20 September 2023 (73000)

It looks like this has brought back the 1.66 Hz comb. Attached is an averaged spectrum for 6 hours of recent data (Sept 20 UTC 0:00 to 6:00); the comb is the peaked structure marked with yellow triangles around 280 Hz. (You can also see some peaks in the production Fscans from the previous day, but it's clearer here.)

Images attached to this comment
keita.kawabe@LIGO.ORG - 13:12, Wednesday 20 September 2023 (73006)

To see if one of the Beckhoff terminals for thermistors is kaput, I disconnected thermistor 2 (pins 9, 11, 22 and 24) from Beckhoff at the back of the heater driver chassis.
For a short while the Beckhoff cable itself was disconnected but the cable was connected back to the breakout board at the back of the driver chassis by 20:05:00 UTC.

Thermistor 1 is still connected. Heater driver input is still receiving voltage from the voltage reference.

 

ansel.neunzert@LIGO.ORG - 09:53, Thursday 21 September 2023 (73028)

I checked a 3-hour span starting at 04:00 UTC today (Sept 21) and found something unusual. There is a similar structure peaked near 280 Hz, but the frequency spacing is different. These peaks lie on integer multiples of 1.1086 Hz, not 1.6611 Hz. Plot attached.

Images attached to this comment
keita.kawabe@LIGO.ORG - 14:32, Thursday 21 September 2023 (73037)DetChar-Request

Detchar, please see if there's any change in 1.66Hz comb.

At around 21:25 UTC, I disconnected OM2 thermistor 1 (pins 10, 12, 23, 25 of the cable at the back of the driver chassis) from Beckhoff and connected thermistor 2 (pins 9, 11, 22, 24).

Images attached to this comment
ansel.neunzert@LIGO.ORG - 11:19, Friday 22 September 2023 (73054)

Checked 6 hours of data starting at 04:00 UTC Sept 22. The comb structure persists with spacing 1.1086 Hz.

jeffrey.kissel@LIGO.ORG - 11:07, Tuesday 03 October 2023 (73235)DetChar
Electrical grounding of the beckhoff systems has been modified as a result of this investigation -- see LHO:73233.
jeffrey.kissel@LIGO.ORG - 12:01, Tuesday 03 October 2023 (73236)CAL
Corroborating Daniel's statement that the OM2 heater power supply was disrupted on Tuesday Aug 29th 2023 (LHO:72970), I've zoomed in on the pitch *OSEM* signals for both
    (a) the suspected time of the power disruption (first attachment), and
    (b) when the power and function of OM2 were restored (second attachment).

One can see that upon power restoration and resuming the HOT configuration of the TSAMS on 2023-09-19 (time (b) above), OM2 pitch *decreases* by 190 [urad] over the course of ~1 hour, with a characteristic "thermal time constant" exponential shape to the displacement evolution.

Then, heading back to 2023-08-29, we can see a similarly shaped event that causes OM2 pitch to *increase* by 160 [urad] over the course of ~40 minutes (time (a) above). At the 40-minute mark, IFO recovery from maintenance begins, and we see the OM2 pitch *sliders* adjusted to account for the new alignment, as had been done several times before with the OM2 ON vs. OFF state changes.
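For reference, the "thermal time constant" shape referred to in both events is just a first-order step response; schematically (no fit is quoted in this entry, so the time constant tau is left symbolic):

```latex
\theta(t) = \theta_{\mathrm{final}} + \left(\theta_{\mathrm{initial}} - \theta_{\mathrm{final}}\right) e^{-t/\tau},
\qquad \left|\theta_{\mathrm{final}} - \theta_{\mathrm{initial}}\right| \approx 160\text{--}190\ \mu\mathrm{rad}
```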

I take this to be consistent with:
    The OM2 TSAMS heater was inadvertently turned OFF and COLD on 2023-08-29 at 18:42 UTC (11:42 PDT), and 
    The OM2 TSAMS heater was restored turned ON and HOT on 2023-09-19 18:14 UTC (11:14 PDT).
Images attached to this comment