To see whether a couple of optics for HWSY could be identified, the illuminator cover on the A1F4 viewport of HAM4 was removed.
Conclusion: the optic in the HWSY DCBS1 lens mount is most likely a dichroic beamsplitter rather than a lens.
The HWSY VacLens mount is oriented with its retaining ring facing the HWS table. The two holes of the retaining ring were clearly visible as Betsy illuminated the chamber from A1F4. The holes are not clearly visible when one looks from the A1F4 flange side, which is consistent with them being partly blocked by the lens mount. The 2" Siskiyou lens mount is 0.50" thick but has a 0.41" deep recess to accommodate an optic. The retaining ring for the mount is 0.13" thick. If the optic in the mount is the dichroic beamsplitter (D1200214), which is 0.375" thick, then the stack-up of the retaining ring and the 3/8" thick optic is thicker than the mount, so the mount must be a custom modification of the standard part. More importantly, since the retaining ring appears to be almost flush with the surface of the lens mount, the optic held is more likely the dichroic beamsplitter than a lens. The data sheet for a PLCX-50.8-360.6 lens gives its thickness as 6.4 mm; a concave lens would be similar. If the optic in the mount were a lens, the retaining ring would sit deeper in the mount.
Betsy / Peter
The modified Siskiyou lens mount dimensions state that it is made from a standard part. Something doesn't quite hang together, though, because it's not clear to me how one makes a 0.56" deep recess in a 0.50" thick part. Either 0.56" is really 0.46", or the part is not 0.50" thick as per the vendor drawing. Update: Eddie Sanchez provided me with this information. D1102166 is 0.65" thick, not 0.50". The upshot is that the optic concerned is still more likely to be the dichroic beamsplitter and not a lens.
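For bookkeeping, the stack-up arithmetic above can be written out explicitly. This is just a sketch in Python using the dimensions quoted above (0.56" recess per the modified-mount dimensions, 0.13" retaining ring, 0.375" dichroic, 6.4 mm lens); the numbers, not the code, are what matter:

```python
# Stack-up check for the HWSY DCBS1 Siskiyou mount, dimensions in inches.
# The 0.56" recess is the modified-mount value quoted above; the standard
# part has a 0.41" recess and the corrected mount thickness is 0.65".
recess_depth   = 0.56            # recess depth of the modified mount
retaining_ring = 0.13            # retaining ring thickness
dichroic       = 0.375           # D1200214 dichroic beamsplitter, 3/8"
lens           = 6.4 / 25.4      # PLCX-50.8-360.6 lens, 6.4 mm ~= 0.25"

for name, optic in [("dichroic", dichroic), ("lens", lens)]:
    stack = optic + retaining_ring
    # Positive margin = ring recessed below the mount face; near zero = flush.
    margin = recess_depth - stack
    print(f"{name}: stack = {stack:.3f} in, recess = {recess_depth:.2f} in, "
          f"ring sits {margin:+.3f} in below the face")
```

The dichroic stack comes out nearly flush (~0.06" below the face), while a lens would leave the ring sitting ~0.18" deep, consistent with the conclusion above.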
Earlier this evening when I was checking doors/lights, I noticed that the PSL Crystal Chiller was below the mid-level. I topped it off with 275mL.
TITLE: 04/19 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Observing at 64Mpc
OUTGOING OPERATOR: Patrick
CURRENT ENVIRONMENT:
Wind: 8mph Gusts, 6mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.14 μm/s
QUICK SUMMARY:
I'm in for emergency coverage (sick operator) for the Owl shift tonight. Patrick is handing over a nice H1 on a seismically quiet night.
TITLE: 04/19 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 65Mpc
INCOMING OPERATOR: Ed (Corey covering)
SHIFT SUMMARY: Some issues staying locked before DRMI, not certain of cause. I think the lock loss shortly after the first transition to observing occurred after the small spike in 0.03 - 0.1 Hz CS seismic BLRMS was already coming down. No issues since last lock ~6 hours ago.
LOG:
23:18 UTC Clicked run on RF noise DTT
23:41 UTC Stopped at DC_READOUT_TRANSITION. Sheila commissioning.
23:45 UTC Restarted DMT range integrand
23:59 UTC Sheila done. Continuing to NLN.
00:11 UTC Observing. Small spike in CS 0.03 - 0.1 Hz seismic BLRMS < .1 um/s. High bounce/roll mode (~9 Hz)
00:13 UTC Lock loss.
01:01 UTC Observing. High bounce/roll mode (~9 Hz)
03:30 UTC LLO down. Out of observing to run a2l.
03:45 UTC Back to observing.
If I recall correctly this image (attached) is normally dark in NLN?
Looking at the auto-captured images, indeed you are correct that there is some light where there didn't used to be any.
The work that could have affected this is the placement of the new black glass beam dump (alog 35636). There shouldn't be any significant reflection from this dump (it has 5 bounces from black glass), and in any case any reflection would go in the opposite direction from the ALSX camera. So, I don't think it could be a reflection from this beam dump. What is possible is that the old razor dump was catching stray beams in addition to the main one, and the black glass dump, with a slightly smaller profile, isn't catching that stray beam.
I think dumping the main POP beam with the new dump is still overall helpful, and we'll see if this extra stray beam actually has any effect on the interferometer. It's small enough in power that it wasn't visible on the IR card we were using, although we didn't use an IR viewer to look for much dimmer beams.
Right now, I think we should monitor things to see if this beam has any effect, and we can look at ISCT1 again next Tuesday.
Back in observing. Greg reported the calibration code is working. Locked for ~2 hours. Wind is coming back down.
The H1 detchar summary page for 4/19/2017 shows the DMT GDS hoft calibration code has marked the calibration and hoft as OK when we went back into observing mode:
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20170419/plots/H1-MULTI_D80927_SEGMENTS-1176595218-86400.png
This shows the calibration code (calibration 1.1.5) is behaving correctly for now.
(The bug reported here, https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=35569, should only occur if channels needed by the calibration report bad dataValid flags or the DAC/framebroadcaster gets restarted.)
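As an aside, a quick way to spot-check the h(t)-OK flag outside the summary pages is to pull the GDS calibration state vector with gwpy. This is only a sketch; the channel name and the bit-0 assignment are my assumptions here, not something confirmed in this entry:

```python
# Hedged sketch: check the GDS calibration state vector around the time we
# went back to observing. The channel name and the assumption that bit 0 is
# the "h(t) OK" flag should be verified against the calibration documentation.
from gwpy.timeseries import TimeSeries

start = 1176595218              # GPS start taken from the summary-page URL above
end = start + 3600              # look at one hour

sv = TimeSeries.get('H1:GDS-CALIB_STATE_VECTOR', start, end)
hoft_ok = (sv.value.astype(int) & 1) > 0    # bit 0 assumed to be "h(t) OK"
print(f"fraction of samples with h(t) marked OK: {hoft_ok.mean():.3f}")
```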
Peter, Nutsinee
HWSX
Today we started off by making sure that the green beam is aligned to the irises. We quickly found that the outgoing path of the sled beam and the green beam didn't quite overlap. We adjusted the HWSX BS and HWSX SOURCE M2 optics to realign the sled beam to the irises. After ensuring that the sled and green beams overlap, we centered the HWS beam onto the HWS camera and adjusted the pick-off BS to align the green to the BASLER camera (Richard insisted it's not a GigE, so I'm going to stop calling it that). We also mounted the camera on the height-adjustable mount, put the IR bandpass filter on the camera, and verified that the camera can actually see the 790 nm sled by putting it right in front of the sled launcher. Right now we can't see the sled on the BASLER camera and we couldn't change the exposure rate (as of now anyway). We need something more sensitive (a flipper mirror alone wouldn't work either, since the camera wasn't able to see any light coming off the final mirror before the HWS camera).
HWSY
We wanted to see some HWSY data, so we went in to make sure that the Y sled was not clipped or anything. The beam looked as bad as usual. We made some minor tweaks to the green beam and the sled so that their paths overlap. We also identified the top beam as a beam reflected off the compensation plate (to figure this out we yawed the CP, then put it back in place). Since we have only one spare sled left, we decided not to replace it until after the vent work.
HWS Code
Starting from 22:00:21 UTC today (April 18) the code is writing data from HWSY camera. Will revert this configuration as soon as I get some power up/down time.
WP 6584
Pulled new cabling for temperature sensors from the mechanical room into the LVEA. Runs in the LVEA are to the following chambers: HAM2, HAM3, HAM4, HAM5, BSC1, and BSC3. Work required climbing on chambers HAM4 and HAM5. This work is part of the HVAC upgrade being done by Apollo. Did not get to finish the HAM2 cable run. It is currently sitting next to HAM3. Will complete work next maintenance period.
Filiberto Clara
TITLE: 04/18 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Nutsinee
CURRENT ENVIRONMENT:
Wind: 24mph Gusts, 20mph 5min avg
Primary useism: 0.16 μm/s
Secondary useism: 0.16 μm/s
QUICK SUMMARY: Nutsinee has been attempting to relock.
TITLE: 04/18 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Patrick
SHIFT SUMMARY: Not much activity after I took over from Cheryl. Locked at NLN briefly but lost it, possibly due to wind (now past 30 mph).
LOG:
19:30 Robert & Evan to End Stations -- turn off PCal camera.
20:20 Fil & Marc out (cable pulling work)
20:36 Robert & Evan back
20:53 Robert to LVEA -- align MC3 camera
20:54 Dave back
21:01 Jenne et al. to ISCT1 to place beam dump.
22:14 Robert done sweeping the LVEA
Evan G., Robert S.
Looking back at Keith R.'s aLOGs documenting changes that happened on March 14 (see 35146, 35274, and 35328), we found that one cause seems to be the shuttering of the OpLev lasers on March 14. Right around this time, 17:00 UTC on March 14 at EY and 16:07 UTC at EX, there is an increase in line activity. The correlated cause is Travis' visit to the end station to take images of the Pcal spot positions. The images are taken using the Pcal camera system and need the OpLevs to be shuttered so that a clean image can be taken without light contamination. We spoke with Travis and he explained that he disconnected the USB interface between the DSLR and the ethernet adapter, and used a laptop to directly take images. Around this time, the lines seem to get worse in the magnetometer channels (see, for example, the plots attached to Keith's aLOG 35328). After establishing this connection, we went to the end stations to turn off the ethernet adapters for the Pcal cameras (the cameras are blocked anyway, so this active connection is not needed). I made some magnetometer spectra before and after this change (see attached). They show that a number of lines in the magnetometers are reduced or are now down in the noise. Hopefully this will mitigate some of the recent reports of combs in h(t). We also performed a short test turning off another ethernet adapter for the H1 illuminator and PD. This was turned off at 20:05:16 18/04/2014 UTC and turned back on at 20:09:56 UTC. I'll post another aLOG with this investigation as well.
Good work! That did a lot of good in DARM. Attached are spectra in which many narrow lines went away or were reduced, comparing 22 hours of FScan SFTs before the change (Apr 18) with 10 hours of SFTs after the change (Apr 19). We will need to collect much more data to verify that all of the degradation that began March 14 has been mitigated, but this first look is very promising - many thanks!
Fig 1: 20-50 Hz
Fig 2: 50-100 Hz
Fig 3: 100-200 Hz
Attached are post-change spectra using another 15 hours of FScan SFTs since yesterday. Things continue to look good.
Fig 1: 20-50 Hz
Fig 2: 50-100 Hz
Fig 3: 100-200 Hz
Correction: the date is 18/04/2017 UTC.
Another follow-up with more statistics. The mitigation from turning off the ethernet adapter continues to be confirmed with greater certainty. Figures 1-3 show spectra from pre-March 14 (1210 hours), a sample of post-March 14 data (242 hours), and post-April 18 (157 hours) for 20-50 Hz, 50-100 Hz and 100-200 Hz. With enough post-April 18 statistics, one can also look more closely at the difference between pre-March 14 and post-April 18. Figures 4-6 and 7-9 show such comparisons with different orderings and therefore different overlays of the curves. It appears there are lines in the post-April 18 data that are stronger than in the pre-March 14 data, and lines in the earlier data that are not present in the recent data. Most notably, 1-Hz combs with +0.25-Hz and 0.50-Hz offsets from integers have disappeared.
Narrow low-frequency lines that are distinctly stronger in recent data include these frequencies:
21.4286 Hz
22.7882 Hz - splitting of 0.0468 Hz
27.4170 Hz
28.214 Hz
28.6100 Hz - PEM in O1
31.4127 Hz and 2nd harmonic at 62.8254 Hz
34.1840 Hz
34.909 Hz (absent in earlier data)
41.8833 Hz
43.409 Hz (absent in earlier data)
43.919 Hz
45.579 Hz
46.9496 Hz
47.6833 Hz
56.9730 Hz
57.5889 Hz
66.7502 Hz (part of 1 Hz comb in O1)
68.3677 Hz
79.763 Hz
83.315 Hz
83.335 Hz
85.7139 Hz
85.8298 Hz
88.8895 Hz
91.158 Hz
93.8995 Hz
95.995 Hz (absent in earlier data)
107.1182 Hz
114.000 Hz (absent in earlier data)
Narrow low-frequency lines in the earlier data that no longer appear include these frequencies:
20.25 Hz - 50.25 Hz (1-Hz comb wiped out!)
24.50 Hz - 62.50 Hz (1-Hz comb wiped out!)
29.1957 Hz
29.969 Hz
Note that I'm not claiming change points occurred for the above lines on March 14 (as I did for the original set of lines flagged) or on April 18. I'm merely noting a difference in average line strengths before March 14 vs. after April 18. Change points could have occurred between March 14 and April 18, shortly before March 14, or shortly after April 18.
To pin down better when the two 1-Hz combs disappeared from DARM, I checked Ansel's handy-dandy comb tracker and found the answer immediately. The two attached figures (screen grabs) show the summed power in the teeth of those combs. The 0.5-Hz offset comb is elevated before March 14, jumps up after March 14, and drops down to normal after April 18. The 0.25-Hz offset comb is highly elevated before March 14, jumps way up after March 14, and drops down to normal after April 18. These plots raise the interesting question of what was done on April 18 that went beyond the mitigation of the problems triggered on March 14.
Figure 1 - Strength of 1-Hz comb (0.5-Hz offset) vs time (March 14 is day 547 after 9/15/2014, April 18 is day 582)
Figure 2 - Strength of 1-Hz comb (0.25-Hz offset) vs time
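For anyone wanting to reproduce this kind of check by hand, the basic operation is just summing spectral power in the bins nearest each comb tooth. A minimal sketch (not Ansel's actual tracker code), assuming you already have an averaged PSD as a numpy array:

```python
import numpy as np

def comb_power(freqs, psd, spacing=1.0, offset=0.25, fmin=20.0, fmax=200.0):
    """Sum PSD power in the bins nearest each tooth of a comb with the
    given spacing (Hz) and offset from integer frequencies."""
    first = np.ceil((fmin - offset) / spacing) * spacing + offset
    teeth = np.arange(first, fmax, spacing)
    idx = np.clip(np.searchsorted(freqs, teeth), 0, len(freqs) - 1)
    return psd[idx].sum()

# Toy example: white spectrum with an injected 1-Hz comb at +0.25 Hz offset.
freqs = np.arange(0, 256, 1.0 / 32)          # 1/32 Hz resolution
psd = np.ones_like(freqs)
psd[(np.abs((freqs - 0.25) % 1.0) < 1e-9) & (freqs > 20) & (freqs < 200)] += 50
print(comb_power(freqs, psd, offset=0.25))   # elevated: comb present
print(comb_power(freqs, psd, offset=0.50))   # background level: no comb
```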
The bi-monthly End X PCal calibration was performed today. Results can be seen at https://dcc.ligo.org/T1500129. Force coefficient for TxPD is 0.02% from its mean value and force coefficient for RxPD is 0.2% from its mean value.
[Jenne, Vaishali, Karl]
We have replaced the razor beam dump on ISCT1 that was causing scattered light problems (see alog 35538) with an Arai-style black glass dump, provided by the 40m (see 40m log 8089, first style). You can see the new dump just to the left of the PD in the attached photo. I was thinking about sending the reflection from this dump (after several black glass bounces) to the razor dump, but I can't see any reflection with just a card, so skipped this step for now. We can come back to it with an IR viewer if we have more time in the future.
We're on our way to NLN, so maybe we'll see if this helps any, if we happen to get high ground motion sometime.
[Jenne, Vaishali, Karl, Betsy]
Koji pointed out to me that even though the new black glass beam dump had been sitting on a HEPA table at the 40m, since it had been so long since it was cleaned, it could have accumulated a bit of dust or film.
So, we temporarily put the razor dump back, disassembled the black glass dump, and with Betsy's guidance cleaned the surfaces of the black glass with first contact. We then reassembled the dump and put it back on the table.
Taking advantage of a few minutes while those working on the cleanroom took a short break, we transitioned to laser hazard so that we could do a fine alignment of the beam dump with the DRMI flashing. The LVEA was transitioned back to laser safe after this brief work was completed, so that the cleanroom team could work more easily.
Jeff K, Jonathan, Dave:
h1calcs was restarted with new code at 10:15 PDT. Jonathan and I took the opportunity to update the H1EDCU_RACCESS.ini file with missing channels, and then restarted the DAQ.
This h1calcs model change was covered under the continued work described in WP #6572, ECR E1700121, II 7828. The restart was to incorporate a few more EPICS monitor channels (24 or so) to support commissioning of the new SRC Detuning Infrastructure (see LHO aLOG 35547). In addition, I moved the pick-offs for the calculated time-dependent correction factors that feed into the next subsequent calculations upstream of the final-answer 128 [sec] low pass. No change was made to the functionality of the infrastructure for h(t); these are all redundant, control-room-only channels.
The changes to the h1calcs model are actually just changes to common library parts, which have been committed to the userapps repo:
/opt/rtcds/userapps/release/cal/common/models/
CAL_CS_MASTER.mdl
CAL_LINE_MONITOR_MASTER.mdl
These EPICS channels have been added to the various MEDM screens, and those screens have been committed to the userapps repo under
/opt/rtcds/userapps/release/cal/common/medm/
CAL_CS_TDEP_F_C_OVERVIEW.adl
CAL_CS_TDEP_F_S_OVERVIEW.adl
CAL_CS_TDEP_KAPPA_PU_OVERVIEW.adl
CAL_CS_TDEP_KAPPA_TST_OVERVIEW.adl
CAL_CS_TDEP_OVERVIEW.adl
Initial results (i.e. the 30 minute NLN lock we just had) indicate that moving the pick-offs that pass one answer to the next calculation upstream of their 128 sec low-pass has cleaned up the final answers. Plots to come when we have more data.
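As a toy illustration of one reason the pick-off location matters: if the downstream calculation consumes the already low-passed factor and the final answer is low-passed again, the 128 sec smoothing is applied twice and the result settles noticeably slower than when only the final answer sees the low-pass. This is just a one-pole sketch, not the actual CAL_CS signal chain:

```python
import numpy as np
from scipy.signal import lfilter

fs, tau = 16.0, 128.0                       # sample rate [Hz], low-pass time constant [s]
alpha = 1.0 / (tau * fs)
b, a = [alpha], [1.0, -(1.0 - alpha)]       # one-pole low-pass standing in for the 128 s LP
lp = lambda x: lfilter(b, a, x)

n = int(3600 * fs)
kappa = np.ones(n)
kappa[n // 2:] = 1.05                       # toy correction factor with a step change

# Pick-off downstream of the LP: the next calculation consumes the already
# low-passed factor and the final answer is low-passed again (two cascaded filters).
double = lp(lp(kappa))
# Pick-off upstream of the LP: the next calculation consumes the raw factor
# and only the final answer is low-passed once.
single = lp(kappa)

# The doubly filtered chain lags further behind the step:
for t in (60, 120, 300, 600):               # seconds after the step
    i = n // 2 + int(t * fs)
    print(f"t+{t:4d}s  single-LP = {single[i]:.4f}   double-LP = {double[i]:.4f}")
```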
ASCII file saved in the desktop folder of the RGA machine in the control room.
Here is a link to a Vertex scan taken a few months ago: https://alog.ligo-wa.caltech.edu/aLOG/uploads/29517_20160907141626_09062016_Vertex_SEM1500_analog.pdf Note: the apparent amplitude discrepancies are due to differing multiplier voltage settings. As such these two scans are for qualitative comparisons only.
Completed WP 6579 and then some.
*left three turbo stations and Kobelco energized, water still valved into QDP80 (#2) for cooling
Follow-up maintenance items:
I replaced the burned-out incandescent lamp with an upgraded LED version (Data Display Products, miniature, wedge-base 24-28VAC white WB200-FW4K28HD)
I de-energized Kobelco and valved out water to QDP80. Tomorrow during commissioning I will enter LVEA and de-energize turbo stations.
I started to repeat the measurement described in LLO alog 28797 while we were waiting for LLO to come back up around 20:05 UTC. Since LLO is back, I have stopped and put things back to normal, and Cheryl is running A2L before we go back to observing. (This was about 45 minutes of commissioning.)
I got as far as tuning an excitation and finding that an offset of around 0.3 in POSY may be about right, but that we need to wait a long time for the OMC ASC loops and the slow kappa C calculation to settle when making these measurements. The template with the excitation is saved at sheila.dwyer/OMC/OMC_alignment_jitter_check.xml We will continue this Wednesday.
Set the gain of PI mode 27 to zero next time you do this measurement.
As Cheryl noted, mode 27 rang up during your work today (mode 19 was just bleed-over from 27). Since we're using the OMC DCPD as the error signal for this mode, driving the OMC ASC loops changes the phase as seen by this mode, such that we must have been driving up the PI; the mechanical ring-up was real, as it was seen by the TransMon QPD.
This mode is nominally stable after the thermal transient, so as long as you're an hour or so into a lock, you can just set the gain of mode 27 to zero during OMC commissioning.
I also took about 10 minutes with the interferometer locked on RF just now to put some offsets into the OMC alignment loops. These dither loops are very slow; it takes about 3 minutes for them to react to an offset change. For POS X, introducing an offset of 0.3 decreases the transmitted power to 88% of the power for the normal alignment; for POS Y, an offset of 0.3 decreases it to 95%. I didn't get to check the ANG loops, but it seems like they will need larger offsets (2 or 3).
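For what it's worth, if one assumes the transmitted power falls off roughly quadratically with these small dither offsets (my assumption, not something measured here), the numbers above give a rough per-loop coupling:

```python
# Back-of-the-envelope under an assumed quadratic model P(x) ~ 1 - c * x**2.
# The couplings below come from the offsets/powers quoted above; the model
# itself is an assumption, not a measured fit.
measurements = {"POS X": (0.3, 0.88), "POS Y": (0.3, 0.95)}

for loop, (offset, power) in measurements.items():
    c = (1.0 - power) / offset**2
    print(f"{loop}: c ~= {c:.2f} per (offset unit)^2")

# Under the same model, if the ANG loops need offsets of ~2-3 for a comparable
# power drop, their coupling would be roughly (0.3 / 2.5)**2 ~ 1/70 of the POS ones.
```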