Displaying reports 13721-13740 of 86305.
Reports until 16:11, Tuesday 12 December 2023
H1 General
anthony.sanchez@LIGO.ORG - posted 16:11, Tuesday 12 December 2023 (74768)
Tuesday EVE Shift Start

TITLE: 12/13 Eve Shift: 00:00-08:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 3mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.33 μm/s
QUICK SUMMARY:
H1 is locked at NOMINAL_LOW_NOISE and has been OBSERVING for the past hour.
Violins lookin great!


LHO General
ryan.short@LIGO.ORG - posted 16:04, Tuesday 12 December 2023 (74765)
Ops Day Shift Summary

TITLE: 12/12 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
INCOMING OPERATOR: Tony
SHIFT SUMMARY: Relatively light maintenance day followed by some trouble getting locked, but after adjusting thresholds in the ALS Guardians, H1 was able to lock without much issue.

LOG:

Start Time System Name Location Laser_Haz Task End Time
16:06 FAC Karen EY - Technical cleaning 17:16
16:06 FAC Kim EX - Technical cleaning 17:29
16:07 FAC Randy EY -   17:05
16:08 VAC Janos, Jordan MX, EX - Turbopump testing 19:35
16:09 TCS Camilla LVEA YES Turning off HWS power supplies 16:17
16:24 VAC Gerardo CS, FCES, EX YES Purge air system testing 20:20
16:35 VAC Travis LVEA YES Turbopump testing 19:50
16:36 FAC Chris +1 LVEA, Outbuildings YES Fire extinguisher inspections 17:50
16:51 CDS Fil EX - PEM cable termination 18:20
17:05 FAC Randy LVEA YES Moving scissor lift to HAM3 17:55
17:08 AOS Betsy, Ali LVEA YES Walkabout 18:25
17:08 TCS Camilla Optics Lab LOCAL CO2 profiler testing 18:25
17:16 FAC Karen, Kim FCES - Technical cleaning 18:10
17:30 FAC Tyler LVEA YES 3IFO 17:41
17:40 TCS TJ Optics Lab LOCAL CO2 profiler testing 18:25
17:52 SQZ Daniel, Nutsinee LVEA YES   19:10
18:13 SEI Jim LVEA YES Move CPS rack (HAM7 ISI down) 18:24
18:21 CDS Fil LVEA YES BSC2 temp sensor 20:15
18:25 SEI Jim CR - HAM6 transfer functions 19:11
18:28 TCS TJ, Camilla, Ali LVEA YES CO2X beam profiling, HWS table checks 20:15
18:45 FAC Karen, Kim LVEA YES Technical cleaning 20:00
19:18 SEI Jim CR - ITMY transfer functions 20:00
21:34 TCS Camilla LVEA - Turning on HWS cameras 21:41
21:45 FAC Richard LVEA - Moving scissor lift 21:50
22:16 VAC Gerardo, Fil MX - Compressor troubleshooting Ongoing
H1 ISC (OpsInfo)
thomas.shaffer@LIGO.ORG - posted 15:34, Tuesday 12 December 2023 (74766)
ALS green transmission threshold lowered

Ryan S, Jenne D, Sheila

The ALS_COMM and DIFF nodes check the green arm transmission when finding IR to ensure that the arms are aligned well enough to move on. This threshold had been set to 0.75, but I lowered it to 0.65 since we've run into it a few times in the last week and have been unable to improve the arm transmission.
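The check described here amounts to a simple threshold comparison; below is an illustrative sketch, not the actual ALS_COMM/DIFF Guardian code (the function name and the normalized-units assumption are mine):

```python
# Hypothetical sketch of the green-transmission check described above.
# The real check lives in the ALS_COMM/DIFF Guardian nodes; names here
# are invented, and transmission is assumed normalized so 1 ~ max power.
GREEN_TRANS_THRESHOLD = 0.65  # lowered from 0.75 per this entry

def arm_ready_for_ir(green_trans, threshold=GREEN_TRANS_THRESHOLD):
    """Return True if the normalized green arm transmission is high
    enough to move on to finding IR."""
    return green_trans >= threshold
```

With the lower threshold, an arm at 0.70 transmission now passes where it previously would have blocked lock acquisition.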

Last week this happened on Thursday (alog74655) and I just manualed around the state to continue locking. That worked to get us back up for that lock, and it wasn't a problem for future locks until today. Today the arm transmission was low for both arms, so we confirmed that there was no PR3 movement and then restored the optics back to where they were at the start of the previous lock. This also didn't help. Looking at the arm power over the last few weeks (attachment 1), it has been lower since the PR3 and table work from last Tuesday (alog74618), but not as low as we were seeing today. The current thinking is that the PR3 move last week brought our powers lower, and while the table work improved the COMM beatnote, it did nothing for the arm powers. Once the IFO gets to a better full-lock alignment, that alignment tends to give us better ALS power on the next lock acquisition, so lowering this threshold should help us get to that point and hopefully help sequential locks.

The long-term solution would be to go back on the table to realign the rest of the COMM path to the camera, then renormalize the arm powers so that 1 is actually our max power, then bring this threshold back to the 0.75 that it was*. Going on that table always carries risk of making things worse, so we definitely don't want to be doing this right before the holidays. We will reconvene on this in 2024 if it's still an issue.

*There's a comment in the guardian code that it was 0.85 from Aug 12, 2020.

Images attached to this report
LHO VE
jordan.vanosky@LIGO.ORG - posted 13:36, Tuesday 12 December 2023 (74763)
Functionality Test Performed on EX/MX Turbo Pumps

Jordan, Janos

We ran the functionality test on the main turbopumps at MX and EX during Tuesday maintenance (12/12/23). The scroll pump is started to bring the pressure down to the low 10^-2 Torr range, at which point the turbo pump is started; the system reaches the low 10^-8 Torr range after a few minutes. The turbo pump system is then left ON for about 1 hour, after which it goes through a shutdown sequence.
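The test sequence above can be written out as an ordered checklist; a minimal sketch using only the steps and rough pressure targets quoted in this entry (this is not the real pump-station interface):

```python
# Ordered steps of the turbopump functionality test described above.
# Pressure targets are the rough values quoted in the entry (Torr).
SEQUENCE = [
    ("start scroll pump", "rough down to low 1e-2 Torr"),
    ("start turbo pump", "reach low 1e-8 Torr after a few minutes"),
    ("soak", "leave turbo pump system ON for about 1 hour"),
    ("shut down", "run the controlled shutdown sequence"),
]

def describe_test():
    """Render the test plan as numbered lines."""
    return [f"{i}. {step}: {goal}" for i, (step, goal) in enumerate(SEQUENCE, 1)]
```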

MX Turbo:

Bearing Life:100%

Turbo Hours: 120

Scroll Pump Hours: 211

EX Turbo:

Bearing Life:100%

Turbo Hours: 277

Scroll Pump Hours: 6317

Closing WP 11573, FAMIS 24846, and FAMIS 24870.

H1 TCS
camilla.compton@LIGO.ORG - posted 13:08, Tuesday 12 December 2023 (74760)
CO2X Beam Scan Started

TJ, Camilla. WP11574

We borrowed the Ophir PH00235 USB MSP-NS-Pyro 9/5 beam scanner from CIT. It has a PH00092 removable 7.5" FL lens on the front and is mounted on a 500mm rail. We checked that the scanner and rail worked in the lab using our Nanoscan laptop and the Nanoscan v1 software.

On CO2X (layout T1200007) we added three gold-coated steering mirrors just after the power control waveplate and added a beam block panel on the back of the table. While aligning we kept the requested CO2 power at 0.1W. We started aligning the beam to the beam scanner but didn't finish before the end of maintenance.

We left the beam scanner and the three mirrors on the table, and we moved the mirror that was in the path ("A" on photo) out of the path. We plan to finish aligning and take data next Tuesday. The cables and driver boxes are in a tote labeled "nanoscan" in the TCS cabinet, and the laptop is back in the optics lab cabinet. Photos of the table and beam path (nominal path in lighter red) attached.

Images attached to this report
H1 General
thomas.shaffer@LIGO.ORG - posted 13:05, Tuesday 12 December 2023 (74761)
LVEA swept

Genie lift found plugged into outlet near TCSY table, unplugged.

Robert's shaker is still connected near HAM2; it is plugged in but powered off.

All else looked good, followed T1500386


H1 General
ryan.short@LIGO.ORG - posted 12:10, Tuesday 12 December 2023 (74757)
Ops Day Mid Shift Report

All but a few maintenance activities have wrapped up to the point where I've turned sensor correction back on and H1 has started lock acquisition.

H1 CDS (DAQ)
david.barker@LIGO.ORG - posted 11:01, Tuesday 12 December 2023 - last comment - 13:17, Wednesday 13 December 2023(74755)
WP11568 TW0 raw minute trends offload

As the first part of the TW0 raw minute trend file offload, tw0 is now writing to a new area freeing up the old files for transfer. 

nds0 was restarted at 10:44 PST to serve the past 6 months of data from their temporary location as the files are being transferred to h1daqframes-0. The file copy takes about 30 hours.

Comments related to this report
david.barker@LIGO.ORG - 16:22, Tuesday 12 December 2023 (74769)

File copy was started at 11:50 Tue. As of 16:05, 43 of 256 dirs had been copied. ETA 13:10 Wed.
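The quoted ETA is what you get by assuming a constant per-directory copy rate; a quick check using only the times in this comment:

```python
from datetime import datetime

# Times from this comment (local/PST assumed): copy started 11:50 Tue
# 12 Dec, and 43 of 256 dirs were done as of 16:05 the same day.
start = datetime(2023, 12, 12, 11, 50)
sample = datetime(2023, 12, 12, 16, 5)
done, total = 43, 256

# Extrapolate: total time = elapsed * (total / done)
eta = start + (sample - start) * (total / done)
print(eta.strftime("%a %H:%M"))  # Wed 13:08, consistent with the 13:10 ETA
```

This also agrees well with the actual completion at 13:15:44 the next day.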

david.barker@LIGO.ORG - 13:17, Wednesday 13 December 2023 (74786)

Copy completed at 13:15:44 PST. I will do the nds0 change and file deletion tomorrow morning.

LHO VE
david.barker@LIGO.ORG - posted 10:13, Tuesday 12 December 2023 (74754)
Tue CP1 Fill

Tue Dec 12 10:08:21 2023 INFO: Fill completed in 8min 17secs

Gerardo confirmed a good fill curbside.

Images attached to this report
H1 TCS
camilla.compton@LIGO.ORG - posted 08:26, Tuesday 12 December 2023 - last comment - 13:12, Wednesday 13 December 2023(74750)
ITM HWS Cameras Powered Off at 16:15 UTC, Search for Cause of DARM Comb

This morning at 8:15am (16:15 UTC) I turned off the external power supply that powers both ITM HWS CCD cameras (Dalsa 1M60), located on the floor under the HWS table. We'll plan to check the magnetometer data to see if the 74738 comb is still present.

After we stopped the HWS code and remotely turned off the cameras in 74738, the DARM comb still remained. Dan, Daniel, and Nutsinee expect this could be the framegrabber in the HWS computer still sending a 7Hz signal. We can troubleshoot this, and how the cameras are grounded to the optics table, later.

Comments related to this report
ansel.neunzert@LIGO.ORG - 09:29, Tuesday 12 December 2023 (74752)

Looks like the comb is gone in the magnetometer channel. Figure 1 pre, figure 2 post.

Images attached to this comment
ansel.neunzert@LIGO.ORG - 10:02, Tuesday 12 December 2023 (74753)

Minor update: appending an hour-long "post" spectrum just to confirm & provide better comparison with previous 1-hour plots.

Images attached to this comment
ansel.neunzert@LIGO.ORG - 11:20, Tuesday 12 December 2023 (74756)

Camilla says the camera was turned on but not initialized at 10:03 Pacific / 18:03 UTC. Took a 30-minute spectrum starting 18:05 UTC, attached. There is a strong near-1Hz comb and a strong near-57Hz comb.

Images attached to this comment
camilla.compton@LIGO.ORG - 12:44, Tuesday 12 December 2023 (74759)

At 20:40 UTC (12:40 PT) we powered off both ITM computers, h1hwsmsr and h1hwsmsr1. Dave has been notified. In 30 minutes, with the computers off, we can try power cycling the cameras to see whether they re-initialize and cause a comb.

ansel.neunzert@LIGO.ORG - 13:34, Tuesday 12 December 2023 (74762)

Combs are still there with ITM computers off. Looks pretty much identical to the previous spectrum (very tiny frequency shift in the near-1Hz comb, though).

Images attached to this comment
camilla.compton@LIGO.ORG - 13:44, Tuesday 12 December 2023 (74764)

At 21:30UTC Erik unplugged the fiber connections that run from the back of the computers to the HWS cameras. Computers still off.

At 21:37UTC I power cycled the external supply to the cameras, to test if the cameras still turn on with a comb with no computer/frame grabber connection.

camilla.compton@LIGO.ORG - 16:37, Tuesday 12 December 2023 (74770)

When the cameras restarted with the fiber link to the computers disconnected, we again saw the 1Hz and 57Hz combs; plot attached. It appears the default for ITMX is 57Hz and for ITMY is 1Hz. We should check if the camera software allows us to set these to zero or off.

For now we've plugged the fiber connections back in, turned on the computers, and restarted the HWS code, both at 7Hz (this created some SDF diffs from the H1:TCS-ITMY_HWS_{}_POS_{X,Y} channels restarting with default values, which kicked us out of observing, sorry).

Lasers are still off; we'll turn them on tomorrow during commissioning to avoid SDF diffs.

Images attached to this comment
camilla.compton@LIGO.ORG - 13:12, Wednesday 13 December 2023 (74785)

Turned on both ITM SLEDs at 21:08 UTC. These H1:TCS-ITM{X,Y}_HWS_SLEDSHUTDOWN channels should be added to SDF.

LHO General
ryan.short@LIGO.ORG - posted 08:11, Tuesday 12 December 2023 (74749)
Ops Day Shift Start

TITLE: 12/12 Day Shift: 16:00-00:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventative Maintenance
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: MAINTENANCE
    Wind: 7mph Gusts, 5mph 5min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.27 μm/s
QUICK SUMMARY:

H1 has just lost lock after 32 hours following a commissioning test. Maintenance activities have begun.

H1 CDS (CDS)
erik.vonreis@LIGO.ORG - posted 07:15, Tuesday 12 December 2023 (74748)
Workstations updated

Workstations were updated and rebooted.  These were OS updates.  Conda packages were not updated.

H1 OpsInfo (SUS)
ryan.short@LIGO.ORG - posted 14:32, Monday 11 December 2023 - last comment - 08:54, Tuesday 12 December 2023(74741)
Cancelled In-Lock Charge Measurements for 12/12

I've taken the SUS_CHARGE node to DOWN so that the automatic in-lock charge measurements don't run before tomorrow morning's Tuesday maintenance time (and I can confirm that doing this does not take H1 out of observing). To re-enable the automated in-lock charge measurements, the SUS_CHARGE node should be requested to INJECTIONS_COMPLETE before next Tuesday.

Comments related to this report
camilla.compton@LIGO.ORG - 08:54, Tuesday 12 December 2023 (74751)

We requested INJECTIONS_COMPLETE, so this node is back in its nominal configuration and will run injections next Tuesday at 7:45am.

H1 CAL
vladimir.bossilkov@LIGO.ORG - posted 08:29, Friday 28 July 2023 - last comment - 12:31, Tuesday 12 December 2023(71787)
H1 Systematic Uncertainty Patch due to misapplication of calibration model in GDS

First observed as a persistent mis-calibration in systematic error monitoring Pcal lines which measure PCAL / GDS-CALIB_STRAIN affecting both LLO and LHO, [LLO Link] [LHO Link], characterised by these measurements consistently disagreeing with the uncertainty envelope.
It is presently understood that this arises from bugs in the code producing the GDS FIR filters, which introduce a sizeable discrepancy; Joseph Betzwieser is spearheading a thorough investigation to correct this.

I make a direct measurement of this systematic error by dividing CAL-DARM_ERR_DBL_DQ / GDS-CALIB_STRAIN, where the numerator is further corrected for kappa values of the sensing, cavity pole, and the 3 actuation stages (GDS does the same corrections internally). This gives a transfer function of the difference induced by errors in the GDS filters.
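In numpy terms, the measurement is just a complex division of two frequency-domain responses; below is a hedged sketch with stand-in arrays (real channel data would come from the frames, which is not shown here):

```python
import numpy as np

freqs = np.logspace(1, 3, 200)  # 10 Hz to 1 kHz, illustrative grid

# Stand-ins for the two calibrated strain estimates: the TDCF-corrected
# CAL-DARM_ERR_DBL_DQ (numerator) and GDS-CALIB_STRAIN (denominator).
darm_err_corrected = np.full_like(freqs, 1.0) * (1 + 0.02j)
gds_strain = np.full_like(freqs, 1.0) * (1 + 0.0j)

# The systematic-error transfer function and its usual summaries.
eta = darm_err_corrected / gds_strain
mag_err_pct = 100 * (np.abs(eta) - 1)       # magnitude error in percent
phase_err_deg = np.degrees(np.angle(eta))   # phase error in degrees
```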

Attached in this aLog, and its sibling aLog in LLO, is this measurement in blue, the PCAL / GDS-CALIB_STRAIN measurement in orange, and the smoothed uncertainty correction vector in red. Attached also is a text file of this uncertainty correction for application in pyDARM to produce the final uncertainty, in the format of [Frequency, Real, Imaginary].

Images attached to this report
Non-image files attached to this report
Comments related to this report
ling.sun@LIGO.ORG - 15:33, Friday 28 July 2023 (71798)

After applying this error TF, the uncertainty budget seems to agree with monitoring results (attached).

Images attached to this comment
ling.sun@LIGO.ORG - 13:02, Thursday 17 August 2023 (72299)

After running the command documented in alog 70666, I've plotted the monitoring results on top of the manually corrected uncertainty estimate (see attached). They agree quite well.

The command is:

python ~cal/src/CalMonitor/bin/calunc_consistency_monitor --scald-config  ~cal/src/CalMonitor/config/scald_config.yml --cal-consistency-config  ~cal/src/CalMonitor/config/calunc_consistency_configs_H1.ini --start-time 1374612632 --end-time 1374616232 --uncertainty-file /home/ling.sun/public_html/calibration_uncertainty_H1_1374612632.txt --output-dir /home/ling.sun/public_html/

The uncertainty is estimated at 1374612632 (span 2 min around this time). The monitoring data are collected from 1374612632 to 1374616232 (span an hour).


Images attached to this comment
jeffrey.kissel@LIGO.ORG - 17:01, Wednesday 13 September 2023 (72871)
J. Kissel, J. Betzwieser

FYI: The time at which Vlad used to gather TDCFs to update the *modeled* response function at the reference time (R, in the numerator of the plots) is 
    2023-07-27 05:03:20 UTC
    2023-07-26 22:03:20 PDT
    GPS 1374469418

This is a time when the IFO was well thermalized.

The values used for the TDCFs at this time were
    \kappa_C  = 0.97764456
    f_CC      = 444.32712 Hz
    \kappa_U  = 1.0043616 
    \kappa_P  = 0.9995768
    \kappa_T  = 1.0401824

The *measured* response function (GDS/DARM_ERR, the denominator in the plots) is from data with the same start time, 2023-07-27 05:03:20 UTC, over a duration of 384 seconds (8 averages of 48 second FFTs).

Note these TDCF values listed above are the CAL-CS computed TDCFs, not the GDS computed TDCFs. They're the values exactly at 2023-07-27 05:03:20 UTC, with no attempt to average further over the duration of the *measurement*. See the attached .pdf, which shows the previous 5 minutes and the next 20 minutes. From this you can see that GDS was computing essentially the same thing as CALCS -- except for \kappa_U, which we know
 - is bad during that time (LHO:72812), and
 - unimpactful w.r.t. the overall calibration.
So the fact that 
    :: the GDS calculation is frozen and
    :: the CALCS calculation is noisy, but is quite close to the frozen GDS value is coincidental, even though
    :: the ~25 minute mean of the CALCS is actually around ~0.98 rather than the instantaneous value of 1.019
is inconsequential to Vlad's conclusions.

Non-image files attached to this comment
louis.dartez@LIGO.ORG - 00:54, Tuesday 12 December 2023 (74747)
I'm adding the modeled correction due to the missing 3.2 kHz pole here as a text file. I plotted a comparison showing Vlad's fit (green), the modeled correction evaluated on the same frequency vector as Vlad's (orange), and the modeled correction evaluated using a dense frequency spacing (blue); see eta_3p2khz_correction.png. The denser frequency spacing recovers an error of about 2% between 400 Hz and 600 Hz. Otherwise, the coarsely evaluated modeled correction seems to do quite well.
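For reference, the shape of such a correction can be sketched assuming a standard single real pole at 3.2 kHz; the sign/direction convention here is my assumption, and the authoritative correction is the pyDARM-modeled text file attached to this comment:

```python
import numpy as np

f_pole = 3200.0  # Hz, the missing pole
f = np.linspace(10.0, 1000.0, 500)

# Response of a single real pole; a model missing this factor would be
# corrected by multiplying it back in (direction depends on convention).
eta = 1.0 / (1.0 + 1j * f / f_pole)

# Magnitude deviation from unity, in percent; at 600 Hz it is already
# ~1.7%, i.e. the few-percent scale discussed in this thread.
mag_dev_pct = 100 * (1 - np.abs(eta))
```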
Images attached to this comment
Non-image files attached to this comment
ling.sun@LIGO.ORG - 12:31, Tuesday 12 December 2023 (74758)

The above error was fixed in the model at GPS time 1375488918 (Tue Aug 08 00:15:00 UTC 2023) (see LHO:72135)
