At ~12:40 PST the PSL HPO shut off due to a trip of the HPO power watchdog; the FE was still running. The laser restarted without issue, but once again we had trouble engaging the injection locking. To get a reasonable power out of the HPO we had to increase the operating current for DB1 again, this time up to 59.5 A (from 56.6 A); this unfortunately did not help with the injection locking, so we let the laser warm up for an hour or so to see if that helped. It didn't. Therefore, Peter and I went into the PSL enclosure at about 14:30 PST to take a look at the alignment onto the injection locking PD. The PDH error signal looked to be a little out of phase but otherwise looked fine; however, no realignment of the beam onto the PD improved the injection locking situation. Just to see what would happen, Peter removed the pump light filter from the front of the PD; the injection locking engaged almost immediately. The lock broke when Peter reinstalled the pump light filter, but relocked almost immediately. Possibly not enough light on the PD?
Looking into the trip, it appears that a rapid decay of DB1 caused a drop in laser power, which eventually triggered the power watchdog. This appears to have been the cause of last Saturday's trip as well. The two attached trends show both laser trips; the 1st is the power out of the HPO and the 2nd is the relative power (in %) of the 4 HPO DBs. It is clear that DB1 was decaying rapidly, especially compared to the other 3 DBs (DB4 shows the same decay pattern, but its relative power output is much higher than DB1's and its decay is not as fast, so it is unlikely to be part of the issue). This decay tracked with the drop in HPO output power, leading me to conclude that the decay of DB1 caused both of these trips. This is somewhat surprising, as DB1 was swapped out last June, while DBs 2, 3, and 4 are all the original DBs installed with the PSL in 2011/2012. DB1 will need to be swapped at the next opportunity if the HPO is to survive the month.
The laser is back up and running to enable commissioning while we prepare for the diode box swap. If it trips off overnight or after hours we will take a look at it in the morning; given the nature of the trips, any restart attempts will have to be done onsite. However, the odds of recovering from another trip are low; we are driving DB1 pretty hard to keep the laser up and running right now, so should the laser trip off again I would prefer to swap the DB before attempting a restart. We will check on the laser first thing in the morning.
Filed FRS 9691 for this trip.
Also filed FRS 9692 for Saturday's trip, which looks to be identical to this one.
This means the EPICS alarm is at the major level. It has been acknowledged and will remain in alarm until we are back to normal operations in a month or two. WP 7271
Started the EY vent at 11:30 am local. Currently at 1.5 Torr; purge air valved out till after lunch. I needed to reboot the local Beckhoff computer because its screen was freezing up, which caused the CC gauges to trip and the CP7 PID settings to default to values of zero. Gerardo did a BURT restore.
Purge air dew point measures -46 degC.
(Patrick, Gerardo)
Burtrestored h0vacey to 9:00.
Vent: CS post-vent cleanup and close-out efforts are finishing up. Venting of End-Y will start today; doors may come off on Wednesday. Prep work continues at End-Y.
Maintenance: Hugh will be working on lifting the 16" conflat to the top of HAM6; he is also servicing HEPI hydraulics at End-Y and doing accumulator pressure checks in the corner station. Peter will be looking at the PSL alignment, which seems to have drifted a bit over the holidays. CDS is rebooting servers and computers to pick up security patches. Daniel is doing some Beckhoff work for the Squeezer implementation. There is a pending ASC model reboot.
Safety Message: Be aware that there are large animals around the site and roads, especially in the mornings. The roads are still very slippery in the mornings. This morning there was a truck in the pond next to Twin Bridges Road.
For reference
Evidently, all other plots are consistent with these events.
Added 300 ml of water to the crystal chiller.
Working in the EE lab swapping a cable connector needed for tomorrow. Will first shut down the HAM5/6 AIP pump cart and check in on the Kobelco unit. Expect to be here for 1-3 hours and will add a comment to this entry when leaving.
The initial attempt at the custom cable + connector combination didn't work because the cable's inner insulator diameter was too large for the connector internals. I have a new idea that should work and will try again tomorrow. 2035 hrs local: leaving the site now.
As per aLOG entry 40022, the oscillator shut down this morning. The laser status screen indicated that there was a power watchdog trip and a head 1-4 flow error. Looking at trend data for the flow rates through the heads, the lowest was head 3. However, its flow rate was relatively constant over the previous 12 hours; the increase at the end is because I restarted the crystal chiller to see if there was a blockage in the system, since I remembered all the flow rates previously being around the 0.7 lpm mark. My conclusion is that the power watchdog tripped because the injection locking broke (for reasons unknown at this point in time). I was a little surprised to find that the power reported by head 1 was so low, and increased the current from 53.9 A to 56.6 A. This improved the output of the locking photodiode from ~0.4-0.6 V to values consistently greater than 1 V as the system was trying to acquire lock. The servo gain control voltage was lowered from 0.50 to 0.00 V. At this point it looks like the alignment onto the locking photodiode needs to be adjusted, as the alignment may have changed during the holiday shutdown period as the system cooled down. I am opting to wait until Monday to make that adjustment. The other servos came back online without too much trouble. The power stabilisation requires some looking at, however.
The laser shutdown was triggered by the power watchdog, which in turn was activated by the injection locking losing lock. A cursory glance at a number of signals does not explain why the injection locking broke. It might be that the SR560, which was added to the injection locking servo to provide some low-frequency boost gain, temporarily saturated. That's conjecture on my part, as I do not know of any monitor signals that it provides.
Sheila, Daniel, Nutsinee
We tested Beckhoff communications to the SQZ chassis on the squeezer rack and ISC rack. Below is the summary:
VCO
VCXO
Phase Shifter/Delay Line (U32, U23, U19)
6MHz Demod (U31)
CLF Common Mode Board Servo
SHG Common Mode Board Servo
LO and OPO Common Mode Board Servo
PZT Driver (There's only one PZT driver chassis that controls all the SQZ PZTs)
Binary IO (whitening chassis)
What's left to do?
Since every chassis had been tested to receive signals properly prior to installation (according to Daniel), I didn't bother to test that again (except for the VCO chassis, which Sheila tested).
The demod power-OK readbacks for the 3 MHz (LO/HD) and 6 MHz (CLF) are now "working". There is no physical readback for these channels, so the value is just ignored. There is a physical readback for the OPO/SHG demod, and it is working.
For the two demods, SHG/OPO and LO/HD, the channels had been swapped in TwinCAT. This is now corrected.
The power-OK readback for the 42.4 MHz RF amplifier in the CER seems broken.
Finished up the testing today. Here's the update:
TTFSS
Whitening Chassis (PD mon, SQZ rack)
------------------------------------------------------------------------------------------------------------
Stuff from last week:
CLF and SHG Common Mode board
VCO
Phase shifter
Demod
I was improving the flashing in the IMC, and then something happened, and there was no light into the IMC.
I collected 14 seconds of data on channels I think might help an expert explain what happened, and have attached the plots.
The NPRO is running, and OSC_PD_AMP_DC_OUT shows 35 W, so that part of the laser is on.
Called Peter; he'll take a look tomorrow.
Filed FRS 9692 for this trip.
[Cheryl, Jenne]
This is going to be a short summary, and we'll comment with more details after we've had a chance to eat dinner. But it seems like the correct IMC Refl beam is coming out of the vacuum onto the table, based on power meter measurements. With MC3 misaligned, we measured about 65 mW on IOT2, compared with 65-71 mW on the PSL table (hard to get a good measurement at either location, but the numbers match pretty well, so it's not like this is a ghost beam or something).
In the end, we put the IMC mirrors and the PZT back to the locations that were known to be good on Wednesday (the HWP was still in, but the refl beam on IOT2 looked reasonable at that time). Cheryl made some adjustments to the bottom periscope mirror in the PSL enclosure, and we seem to be getting the right beam out. However, the beam on the table doesn't look so great, so we're not 100% sure that we're really happy with the refl beam, but it's at least the right beam coming out.