This chapter contains the following topics:
Internal Components
Note | The CPUs on the Cisco UCS B200 M6 blade server are in opposite locations compared to previous generations. On the UCS B200 M6 blade server, CPU 1 is at the rear of the blade (the side nearest the internal connectors) and CPU 2 is at the front of the blade (the side nearest the faceplate and external ports and connectors). |
1 | Front mezzanine connector | 2 | DIMM slots. DIMMs are secured by white or yellow latches. White latches indicate the memory is connected to CPU 1. Yellow latches indicate the memory is connected to CPU 2. |
3 | CPU socket 1 (Populated) CPU 1 connects to DIMMs with a white DIMM latch. This CPU socket must always be populated. If your server will run with only one CPU, the CPU must be installed in this socket. | 4 | CPU socket 2 (Unpopulated) CPU 2 connects to DIMMs with a yellow DIMM latch. This CPU socket is populated in a normal, dual-CPU deployment. If your server will run with only one CPU, the CPU must be installed in CPU socket 1. |
5 | CPU heatsink install guide pins | 6 | mLOM connector |
7 | Rear mezzanine connector |
Removing and Installing the Top Cover
The server top cover provides protection and proper airflow for the internal components. The top cover is secured by a release button on the top of the blade.
To remove and replace the server top cover, follow this procedure:
-
Removing the Top Cover
-
Installing the Top Cover
Removing the Top Cover
To remove the server top cover, the blade must be removed from the chassis.
Procedure
Step1 | Press and hold the release button down. |
Step2 | While holding the release button down, lift the rear of the cover up, and slide it off of the blade. |
What to do next
Reinstall the top cover. See Installing the Top Cover.
Installing the Top Cover
Use this procedure to install the server top cover.
Procedure
Step1 | Align the pins on the rear of the server with the channel in the top cover. |
Step2 | Seat the cover on the sheet metal. |
Step3 | Holding the front edge of the cover down, slide the cover forward until it locks into place. |
Replacing the Rear Mezzanine Module
To replace the rear mezzanine module, see the following:
-
Removing the Rear Mezzanine Module
-
Installing the Rear Mezzanine Module
Removing the Rear Mezzanine Module
Use this task to remove the rear mezzanine module.
If you are removing the virtual interface card (VIC) in the mLOM slot, you must first remove the rear mezzanine module.
Procedure
Step1 | Using a #2 Phillips screwdriver, loosen the two rear mezzanine module captive screws. |
Step2 | Grasp the rear mezzanine module where “PRESS HERE TO INSTALL” is stamped onto the module. |
Step3 | Lift the module to disconnect it from its motherboard connector. |
What to do next
If you will be removing the virtual interface card (VIC), see Removing a Virtual Interface Card from the mLOM Slot.
Installing the Rear Mezzanine Module
The rear mezzanine module slot is located above the mLOM module slot. Depending on your server, the mLOM module slot could host a virtual interface card (VIC). This procedure assumes that a VIC is installed.
If your server has a VIC, make sure that it is installed before installing the rear mezzanine module. See Installing Virtual Interface Card in the mLOM Slot.
Use this procedure to install a rear mezzanine module.
Procedure
Step1 | Position the rear mezzanine module above the motherboard connector and align the two rear mezzanine module captive screws with the standoff posts on the motherboard. |
Step2 | Firmly press the rear mezzanine module into the motherboard connector where “PRESS HERE TO INSTALL” is stamped onto the module. |
Step3 | Using a #2 Phillips screwdriver, tighten the two rear mezzanine module captive screws. |
Replacing CPUs and Heatsinks
When the blade server is shipped from the factory, all the components are installed. The following illustration shows the various parts of the assembled and installed CPU.
1 | Heatsink | 2 | CPU carrier |
3 | CPU | 4 | Bolster Plate on server motherboard |
5 | CPU Socket | 6 | Blade server motherboard |
Any replacement CPUs are shipped in a separate shipping package which contains the CPU, CPU carrier, and a fixture.
To replace a CPU, you will remove the CPU from the fixture, then install the CPU into the CPU socket on the server motherboard. See the following topics:
-
Required Tools for CPU and Heatsink Replacement
-
Removing CPUs and Heatsinks
-
Installing CPUs and Heatsinks
Required Tools for CPU and Heatsink Replacement
The following tools are required for replacing server CPUs and heatsinks:
-
ESD-safe workspace, such as a rubberized mat, where you can safely put components that are out of the server.
-
ESD gloves
-
T30 Torx driver
-
M6 CPU Fixture (UCS-CPUATI-3=)
-
Cleaning Kit (UCSX-HSCK=)
-
Thermal Grease (UCS-CPU-TIM=)
Note | Blades can ship with either a dual CPU or single CPU configuration. If your blade is a single CPU configuration, the unpopulated CPU socket ships with a dust cover. |
Removing CPUs and Heatsinks
Use the following procedure to remove an installed CPU and heatsink from the blade server. With this procedure, you will remove the CPU from the motherboard, disassemble individual components, then place the CPU and heatsink into the fixture that came with the CPU.
Procedure
Step1 | Detach the CPU and heatsink (the CPU assembly) from the CPU socket. |
Step2 | Remove the CPU assembly from the motherboard. |
Step3 | Attach a CPU dust cover (UCS-CPU-M6-CVR=) to the CPU socket. |
Step4 | Detach the CPU from the CPU carrier by disengaging the CPU clips and using the TIM breaker. |
Step5 | Transfer the CPU and carrier to the fixture. |
Step6 | Use the provided cleaning kit (UCSX-HSCK=) to remove all of the thermal interface material (thermal grease) from the CPU, CPU carrier, and heatsink. |
What to do next
Choose the appropriate option:
-
If you will be installing a CPU, go to Installing CPUs and Heatsinks.
-
If you will not be installing a CPU, verify that a CPU socket cover is installed. This option is valid only for CPU socket 2 because CPU socket 1 must always be populated in a runtime deployment.
Installing CPUs and Heatsinks
Use this procedure to install a CPU if you have removed one, or if you are installing a CPU in an empty CPU socket. To install the CPU, you will move the CPU to the fixture, then attach the CPU assembly to the CPU socket on the server motherboard.
Procedure
Step1 | Remove the CPU socket dust cover (UCS-CPU-M6-CVR=) from the server motherboard. |
Step2 | Grasp the CPU fixture on the edges labeled PRESS, lift it out of the tray, and place the CPU assembly on an ESD-safe work surface. |
Step3 | Apply new TIM. |
Step4 | Attach the heatsink to the CPU fixture. |
Step5 | Install the CPU assembly onto the CPU motherboard socket. |
Replacing Memory DIMMs
To replace a memory DIMM, see the following topics:
-
DIMM Slot Identifiers
-
Memory Population Guidelines
-
Removing DIMMs or DIMM Blanks
-
Installing DIMMs or DIMM Blanks
-
Memory Performance
-
Memory Mirroring and RAS
DIMM Slot Identifiers
This blade server contains 32 DIMM slots, 16 per CPU.
To assist with identification, each DIMM slot displays its memory processor and slot ID on the motherboard. For example, P1 A1 indicates slot A1 for processor 1.
Also, you can further identify which DIMM slot connects to which CPU by the latch color for the DIMM slot.
-
All DIMM slots with white latches are connected to CPU 1.
-
All DIMM slots with yellow latches are connected to CPU 2.
-
DIMM slots with white and yellow latches are oriented 180 degrees from each other due to the keying in the slot. While installing DIMMs in sockets with white latches, you will need to rotate the DIMMs 180 degrees.
Caution
If you feel resistance while seating a DIMM into its socket, do not force the DIMM or you risk damaging the DIMM or the slot. Check the keying on the slot and verify it against the keying on the bottom of the DIMM. When the DIMM's and slot's keys are aligned, reinstall the DIMM.
For each CPU, each set of 16 DIMMs is arranged into 8 channels, where each channel has two DIMMs. Each DIMM slot is numbered 1 or 2, and each DIMM slot 1 is blue and each DIMM slot 2 is black. Each channel is identified by two pairs of letters and numbers where the first pair indicates the processor, and the second pair indicates the memory channel and slot in the channel.
-
Channels for CPU 1 are P1 A1 and A2, P1 B1 and B2, P1 C1 and C2, P1 D1 and D2, P1 E1 and E2, P1 F1 and F2, P1 G1 and G2, P1 H1 and H2.
-
Channels for CPU 2 are P2 A1 and A2, P2 B1 and B2, P2 C1 and C2, P2 D1 and D2, P2 E1 and E2, P2 F1 and F2, P2 G1 and G2, P2 H1 and H2.
The following figure shows how DIMMs and channels are physically laid out and numbered. DIMM channel and slot IDs for CPU 1 are shown in blue text, and DIMM channel and slot IDs for CPU 2 are shown in black text.
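As an informal illustration (not part of the official documentation), the slot-ID convention described above can be captured in a small parser. The function name and the returned fields are hypothetical; only the label format, channel letters, and latch/slot colors come from the text.

```python
import re

def parse_dimm_slot_id(slot_id):
    """Split a motherboard DIMM label such as 'P1 A1' into its parts.

    Hypothetical helper for illustration. The label format follows the
    convention above: 'P<cpu> <channel><slot>', with channels A-H and
    slots 1 (blue) or 2 (black).
    """
    m = re.fullmatch(r"P([12])\s*([A-H])([12])", slot_id.strip().upper())
    if m is None:
        raise ValueError(f"unrecognized DIMM slot ID: {slot_id!r}")
    cpu, channel, slot = int(m.group(1)), m.group(2), int(m.group(3))
    return {
        "cpu": cpu,                                   # processor number
        "channel": channel,                           # A through H
        "slot": slot,                                 # slot within channel
        "latch_color": "white" if cpu == 1 else "yellow",
        "slot_color": "blue" if slot == 1 else "black",
    }
```

For example, `parse_dimm_slot_id("P1 A1")` reports a white-latched, blue slot on CPU 1, matching the color rules listed above.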
Memory Population Guidelines
The following is a partial list of memory usage and population guidelines. For detailed information about memory usage and population, download the Cisco UCS C220/C240/B200 M6 Memory Guide.
Caution | Only Cisco memory is supported. Third-party DIMMs are not tested or supported. |
-
All DIMMs must be DDR4 DIMMs, or a combination of DDR4 DIMMs and Intel Optane persistent memory 200 series (Intel Optane PMem 200 series) DIMMs.
-
x4 DIMMs are supported.
-
DDR4 memory is supported as documented in the Cisco UCS B200 M6 Spec Sheet. See https://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-servers/b200m6-specsheet.pdf.
-
DDR4 and Intel Optane Persistent Memory Series 200 DIMMs are supported as specified in the Cisco UCS B200 M6 Spec Sheet. See https://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-servers/b200m6-specsheet.pdf.
-
For memory population rules, see the Cisco UCS B200 M6 Spec Sheet. See https://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-servers/b200m6-specsheet.pdf.
-
DIMMs must be loaded lowest-numbered slot first.
-
Memory ranks are 64- or 72-bit chunks of data that each memory channel for a CPU can use. Each memory channel can support a maximum of 8 memory ranks. For quad-rank DIMMs, a maximum of 2 DIMMs are supported per channel (4 ranks * 2 DIMMs).
-
Mixed ranks of DIMMs are allowed in the same channel, but you must populate the DIMMs with the higher number of ranks in the lower-numbered slots.
-
All slots must be populated with either a DIMM or a DIMM blank. For installation instructions, see Installing DIMMs or DIMM Blanks.
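The rank and slot-ordering rules above can be sketched as a small validation helper. This is an illustrative aid under the stated limits (two DIMMs and eight ranks per channel, higher-rank DIMMs in lower-numbered slots); the function names are hypothetical.

```python
def channel_rank_count(dimm_ranks):
    """Total memory ranks presented by the DIMMs in one channel."""
    return sum(dimm_ranks)

def channel_population_ok(dimm_ranks, max_ranks=8, max_dimms=2):
    """Check one channel against the guidelines above.

    dimm_ranks lists the rank count of each installed DIMM, ordered
    slot 1 first. Hypothetical helper, not a Cisco tool.
    """
    if len(dimm_ranks) > max_dimms:
        return False                      # at most two DIMMs per channel
    if channel_rank_count(dimm_ranks) > max_ranks:
        return False                      # at most eight ranks per channel
    # Higher-rank DIMMs must sit in lower-numbered slots,
    # so the rank counts must be non-increasing.
    return all(a >= b for a, b in zip(dimm_ranks, dimm_ranks[1:]))
```

For example, two quad-rank DIMMs fill a channel exactly (4 ranks × 2 DIMMs = 8 ranks), while a dual-rank DIMM in slot 1 followed by a quad-rank DIMM in slot 2 violates the ordering rule.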
Memory Population Order
The Cisco UCS B200 M6 blade has two memory options, DIMMs only or DIMMs plus Intel Optane PMem 200 series memory.
Memory slots are color coded, blue and black. The color-coded channel population order is blue slots first, then black.
For optimal performance, populate DIMMs in the order shown in the following table, depending on the number of CPUs and the number of DIMMs per CPU. If your server has two CPUs, balance DIMMs evenly across the two CPUs as shown in the table.
Note | The table below lists the recommended configurations. Using 3, 5, 7, 9, 10, 11, or 13-15 DIMMs per CPU is not recommended; configurations other than those listed will result in reduced performance. |
The following table shows the memory population order for DDR4 DIMMs.
Number of DDR4 DIMMs per CPU (Recommended Configurations) | Populate CPU 1 Slots | Populate CPU 2 Slots |
P1 Blue #1 Slots P1_slot-ID | P1 Black #2 Slots P1_slot-ID | P2 Blue #1 Slots P2_slot-ID | P2 Black #2 Slots P2_slot-ID | |
1 | A1 | - | A1 | - |
2 | A1, E1 | - | A1, E1 | - |
4 | A1, C1, E1, G1 | - | A1, C1, E1, G1 | - |
6 | A1, C1, D1, E1, G1, H1 | - | A1, C1, D1, E1, G1, H1 | - |
8 | A1, B1, C1, D1, E1, F1, G1, H1 | - | A1, B1, C1, D1, E1, F1, G1, H1 | - |
12 | A1, C1, D1, E1, G1, H1 | A2, C2, D2, E2, G2, H2 | A1, C1, D1, E1, G1, H1 | A2, C2, D2, E2, G2, H2 |
16 | All populated (A1 through H1) | All populated (A2 through H2) | All populated (A1 through H1) | All populated (A2 through H2) |
Note | CPU 1 and CPU 2 must be populated identically. |
Note | For the 8+1 DIMM configuration only, Memory mode is not supported. All other DIMM configurations support Memory mode and all other modes. |
The following table shows the memory population order for DDR4 plus Intel Optane PMem 200 series DIMMs.
Total Number of DIMMs per CPU | DDR4 DIMM Slots | Intel Optane PMem 200 Series DIMM Slots |
4+4 DIMMs | A1, C1, E1, G1 | B1, D1, F1, H1 |
8+1 DIMMs | A1, B1, C1, D1, E1, F1, G1, H1 | A2 |
8+4 DIMMs | A1, B1, C1, D1, E1, F1, G1, H1 | A2, C2, E2, G2 |
8+8 DIMMs | A1, B1, C1, D1, E1, F1, G1, H1 | A2, B2, C2, D2, E2, F2, G2, H2 |
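For planning purposes, the DDR4-only population order in the first table can be transcribed into a simple lookup. This is an illustrative sketch, not a Cisco utility; the dictionary and function names are hypothetical, and both CPUs use the same per-CPU pattern as noted above.

```python
# DDR4-only population order per CPU, keyed by DIMM count,
# transcribed from the recommended-configurations table above.
DDR4_POPULATION_ORDER = {
    1:  ["A1"],
    2:  ["A1", "E1"],
    4:  ["A1", "C1", "E1", "G1"],
    6:  ["A1", "C1", "D1", "E1", "G1", "H1"],
    8:  ["A1", "B1", "C1", "D1", "E1", "F1", "G1", "H1"],
    12: ["A1", "C1", "D1", "E1", "G1", "H1",
         "A2", "C2", "D2", "E2", "G2", "H2"],
    16: [ch + slot for slot in ("1", "2") for ch in "ABCDEFGH"],
}

def slots_to_populate(dimms_per_cpu):
    """Return the slot IDs to fill on each CPU, or raise for a
    DIMM count the table does not recommend."""
    try:
        return DDR4_POPULATION_ORDER[dimms_per_cpu]
    except KeyError:
        raise ValueError(
            f"{dimms_per_cpu} DIMMs per CPU is not a recommended configuration"
        ) from None
```

For example, `slots_to_populate(2)` returns `["A1", "E1"]`, and a request for 3 DIMMs per CPU raises an error, mirroring the note above the table.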
DIMM Slot Keying Consideration
DIMM slots with white and yellow latches are oriented 180 degrees from each other. In the center memory column, slots with white and yellow latches are next to each other, so the DIMM orientation must change depending on which slots you are populating with DIMMs.
Each DIMM slot has a key that fits a notch in the DIMM, and because of the 180-degree orientation difference, the DIMM slot keys are at a different location for DIMM slots with white and yellow latches.
1 | DIMM slot key |
2 | Example of DIMM slot key location difference |
When installing a DIMM, always make sure that the key in the DIMM slot lines up with the notch in the DIMM.
Caution | If you feel resistance while seating a DIMM into its socket, do not force the DIMM or you risk damaging the DIMM or the slot. Check the keying on the slot and verify it against the keying on the bottom of the DIMM. When the slot's key and the DIMM's notch are aligned, reinstall the DIMM. |
Removing DIMMs or DIMM Blanks
The server top cover must be removed to access the DIMM slots.
To remove a DIMM or a DIMM blank (UCS-DIMM-BLK=) from a slot on the blade server, follow these steps.
Procedure
Step1 | Grasp each DIMM baffle and lift it off of the blade. Each DIMM baffle is fitted onto a standoff, so you need to pull straight up far enough to disconnect the baffle from the standoff. |
Step2 | Open both DIMM connector latches by pushing them away from each other. |
Step3 | Grasp each end of the DIMM or blank and lift it out of the socket. |
Step4 | If you are removing a DIMM blank and installing a DIMM, keep the DIMM blank in a safe place. |
What to do next
Go to Installing DIMMs or DIMM Blanks.
Installing DIMMs or DIMM Blanks
To install a DIMM or a DIMM blank (UCS-DIMM-BLK=) into a slot on the blade server, follow these steps.
Caution | White and yellow DIMM latches are oriented 180 degrees from each other. Check the keying on the DIMM and its slot to verify that they are properly aligned before installing the DIMM. See DIMM Slot Identifiers. |
Procedure
Step1 | Open both DIMM connector latches. |
Step2 | Insert the DIMM and press evenly on both ends until it clicks into place in its slot. |
Step3 | Press the DIMM connector latches inward slightly to seat the DIMM or blank fully. |
Step4 | Populate all slots with a DIMM or DIMM blank. A slot cannot be empty, so make sure all slots have either a DIMM or DIMM blank before reinserting the blade. |
Step5 | Install each DIMM baffle. |
Memory Performance
When considering the memory configuration of the blade server, there are several things to consider. For example:
-
When mixing DIMMs of different densities (capacities), install the highest-density DIMM in slot 1, then populate the remaining slots in descending order of density.
-
Besides DIMM population and choice, the selected CPU(s) can have some effect on performance.
Memory Mirroring and RAS
The Intel CPUs within the blade server support DDR4 memory mirroring only in 8 and 16 DIMM configurations on each CPU. If memory mirroring is used, DRAM size is reduced by 50 percent for reasons of reliability.
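As a quick arithmetic check of the mirroring rule (a hypothetical helper, not a Cisco utility):

```python
def mirrored_capacity_gb(dimms_per_cpu, dimm_size_gb, cpus=2):
    """Usable DRAM capacity with DDR4 memory mirroring enabled.

    Per the statement above, mirroring is supported only with 8 or 16
    DIMMs per CPU, and usable DRAM is half of what is installed.
    """
    if dimms_per_cpu not in (8, 16):
        raise ValueError("mirroring requires 8 or 16 DIMMs per CPU")
    installed = dimms_per_cpu * dimm_size_gb * cpus
    return installed / 2
```

For example, 8 × 64 GB DIMMs per CPU across two CPUs installs 1024 GB, of which 512 GB is usable with mirroring enabled.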
Replacing a Virtual Interface Card
Use the following topics to replace the VIC:
-
Removing a Virtual Interface Card from the mLOM Slot
-
Installing Virtual Interface Card in the mLOM Slot
Removing a Virtual Interface Card from the mLOM Slot
Use this task to remove a VIC card from the mLOM module slot. You might need to remove additional components to gain access to the mLOM module slot.
Procedure
Step1 | If a rear mezzanine module is installed, remove it to provide access to the mLOM slot. | ||
Step2 | Using a #2 Phillips screwdriver, loosen the captive screw. | ||
Step3 | Grasp the VIC card where “PRESS HERE TO INSTALL” is stamped onto the card. |
Step4 | Pull the VIC connector up to disconnect it from the motherboard mLOM connector. |
What to do next
Go to Installing Virtual Interface Card in the mLOM Slot.
Installing Virtual Interface Card in the mLOM Slot
Use this task to install a virtual interface card (VIC) into the motherboard connector in the mLOM slot. You might need to remove additional components to gain access to the mLOM slot.
Procedure
Step1 | If a rear mezzanine module is installed, remove it to provide access to the mLOM slot. See Removing the Rear Mezzanine Module. |
Step2 | Position the VIC connector above the motherboard connector and align the captive screw with the standoff post on the motherboard. |
Step3 | Firmly press the VIC connector into the motherboard connector where “PRESS HERE TO INSTALL” is stamped onto the card. |
Step4 | Using a #2 Phillips screwdriver, tighten the captive screw. |
Replacing the Front Mezzanine Module
In the front mezzanine slot, the Cisco B200 M6 supports a front mezzanine module. The front mezzanine module mounts onto the motherboard, and provides connection to the power plane and communication to other components in the server. Depending on the type, the front mezzanine module accepts the server's M.2 mini storage modules or front-loading 7 mm SSDs.
The following front mezzanine modules are supported.
-
A front mezzanine module that supports a mini-storage module for M.2 drives.
-
A front mezzanine module that supports 12G SAS RAID.
To replace the front mezzanine module, use the following procedures:
-
Removing the Front Mezzanine Module
-
Installing the Front Mezzanine Module
Removing the Front Mezzanine Module
Both front mezzanine modules attach to the blade through four threaded standoffs. Two alignment pins in the blade's front mezzanine slot enforce the correct position for the module on the blade.
Note | The front mezzanine mini-storage module for M.2 drives can be removed with the M.2 drives in place; removing the drives first is optional. The front mezzanine module for 12G SAS RAID supports front-loading 7 mm drives. You must remove those drives before removing the front mezzanine module. |
Procedure
Step1 | Using a #2 Phillips screwdriver, loosen the four screws that secure the front mezzanine module to the standoffs. |
Step2 | Grasp the front mezzanine module where it is labeled. |
Step3 | Holding the front mezzanine module level, pull straight up to remove it from the blade. |
Installing the Front Mezzanine Module
The front mezzanine module occupies the blade's front mezzanine slot. Guide pins on the blade help align with the guide holes on the front mezzanine module.
Use the following procedure to install the front mezzanine module.
Procedure
Step1 | Orient the guide holes on the front mezzanine module with the guide pins on the blade. |
Step2 | Align the thumbscrews with the threaded standoffs. |
Step3 | Grasp the module where it is labeled |
Step4 | Using a #2 Phillips screwdriver, tighten the thumbscrews to secure the module to the blade. |
Replacing a Cisco Boot-Optimized M.2 RAID Controller
The Cisco Boot-Optimized M.2 RAID controller sits in the front mezzanine storage slot and provides RAID connectivity for the M.2 SSD server storage. The RAID controller consists of a small PCB daughter card, an M.2 SATA SSD drive carrier, and individual M.2 SSDs. The entire RAID controller and the individual SATA SSDs are field replaceable.
To replace the Cisco boot-optimized M.2 RAID controller, see the following topics:
-
Removing a Cisco Boot-Optimized M.2 RAID Controller
-
Installing a Cisco Boot-Optimized M.2 RAID Controller
Removing a Cisco Boot-Optimized M.2 RAID Controller
Use this procedure to remove a Cisco boot-optimized M.2 RAID controller.
Before you begin
Use this procedure to remove the M.2 RAID controller from the blade. The M.2 RAID controller consists of two M.2 mini storage carriers, and each carrier can contain a pair of M.2 SATA drives. The embedded M.2 SATA mini storage SSDs (drives) in the front mezzanine mini storage module are not hot swappable.
Procedure
Step1 | If you have not already removed the top cover, do so now. See Removing the Top Cover. | ||
Step2 | (Optional) Remove the front mezzanine mini storage module for M.2 from the server. See Removing the Front Mezzanine Module. | ||
Step3 | Remove the M.2 mini storage carrier from the server. |
Step4 | If you are transferring SATA M.2 drives from an old M.2 controller to a replacement controller, complete the following steps before installing the replacement controller: |
What to do next
Installing a Cisco Boot-Optimized M.2 RAID Controller
Installing a Cisco Boot-Optimized M.2 RAID Controller
Use this procedure to install the Cisco boot-optimized M.2 RAID controller. The slots on the controller are labeled 1 and 2 so that you can identify each slot for its carrier.
Before you begin
If you need to replace the RAID controller, you must transfer the individual M.2 storage drives from the original RAID controller to the replacement RAID controller before installing it. See Removing a Cisco Boot-Optimized M.2 RAID Controller.
Procedure
Step1 | Align the two mounting holes on the carrier with the guide pins on the storage module. |
Step2 | Lower the carrier onto the controller on both ends, making sure the securing clips snap in. |
Step3 | Simultaneously push on the four corners of the carrier to fully seat it. |
Step4 | If you removed the front mezzanine module for M.2, reinstall it now. See Installing the Front Mezzanine Module. |
Step5 | Reinsert the blade server into the server chassis. |
Replacing 7 mm Front Mezzanine Drives
The Cisco UCS B200 M6 blade has a maximum of two front-loading 7 mm drives accessible through the front of the blade. Drives can be either SATA or NVMe. Drives are field-replaceable.
To replace the blade's front drives, use the following procedures:
-
Removing a 7 mm SATA SSD
-
Installing a 7 mm SATA SSD
-
Removing an NVMe Drive
-
Installing an NVMe Drive
Removing a 7 mm SATA SSD
Front-loading 7 mm SATA SSDs are hot pluggable/hot swappable.
Use this procedure to remove a front-loading 7 mm SATA SSD.
Procedure
Step1 | Grasp the SSD by its finger holds and pinch them together. |
Step2 | Slide the SSD out of the drive bay. |
What to do next
Reinstall a SATA SSD. See Installing a 7 mm SATA SSD.
Installing a 7 mm SATA SSD
If you removed a 7 mm SATA SSD, use this procedure to install another 7 mm SATA SSD.
Procedure
Step1 | Check the drive label to ensure that you are installing a SATA SSD. |
Step2 | Check the label on the faceplate to verify that the SSD is not upside down. As a safeguard, the drives are designed with keys to enforce proper installation. |
Step3 | Holding the SSD level, align it with the empty drive bay, then slide the SSD completely into the drive bay. |
Removing an NVMe Drive
Front facing NVMe drives are hot pluggable/hot swappable. Use this procedure to remove an NVMe drive.
Caution | NVMe drives are physically and visually the same as SATA drives except for the label on the drive faceplate. When you remove an NVMe drive, make sure to install another NVMe drive in its place. |
Procedure
Grasp the drive by its finger holds and slide it out of the drive bay.
What to do next
Reinstall an NVMe drive. See Installing an NVMe Drive.
Installing an NVMe Drive
If you removed an NVMe drive, use this procedure to install another NVMe drive.
Caution | An NVMe drive is physically and visually the same as a SATA drive except for the label on the drive faceplate. Only install an NVMe drive into a bay from which you removed an NVMe drive. |
Procedure
Step1 | Check the drive label to ensure that you are installing an NVMe drive. |
Step2 | Orient the drive so that the gasket is facing up. Also, you can check the label on the faceplate to verify that the drive is oriented correctly. |
Step3 | Holding the drive level, align it with the empty drive bay, then slide the drive in until it no longer moves. |
Replacing a Front Mezzanine Drive Blank
The Cisco UCS B200 M6 blade has two drive bays on the blade faceplate. A minimum of one drive must be installed.
A front mezzanine drive blank (UCSB-FBLK-M6) must be installed in any empty drive bay. Do not operate the blade with an empty drive bay.
To replace a front mezzanine drive blank, use the following procedures:
-
Removing a Drive Blank
-
Installing a Drive Blank
Removing a Drive Blank
Drive blanks are accessible from the front of the blade. Use this procedure to remove a drive blank (UCSB-FBLK-M6).
Note | Do not operate the blade with an empty drive bay. Always install a drive blank in a drive bay that does not have a drive installed. |
Procedure
Step1 | Pinch the two retaining tabs towards each other. |
Step2 | While holding the retaining tabs inward, pull the drive blank out of the drive bay. |
What to do next
Choose the appropriate option:
-
Install a front-loading 7 mm SATA SSD. See Installing a 7 mm SATA SSD.
-
Install a drive blank. See Installing a Drive Blank.
Installing a Drive Blank
The blade has two drive bays accessible from the front of the blade. In a minimum configuration, the blade has one drive installed and one empty drive bay. Any empty drive bay must have a drive blank installed (UCSB-FBLK-M6). Do not operate the blade without a drive blank installed in an empty drive bay.
Use this procedure to install a drive blank.
Procedure
Step1 | Grasp the drive blank by the retaining tabs. |
Step2 | Holding the drive blank level, align it with the empty drive bay and slide it into the bay until it no longer moves. |
Removing the Trusted Platform Module (TPM)
The TPM module is attached to the printed circuit board assembly (PCBA). You must disconnect the TPM module from the PCBA before recycling the PCBA. The TPM module is secured to a threaded standoff by a tamper-resistant screw. If you do not have the correct tool for the screw, you can use a pair of pliers to remove the screw.
Before you begin
Note | For Recyclers Only! This procedure is not a standard field-service option. This procedure is for recyclers who will be reclaiming the electronics for proper disposal to comply with local eco design and e-waste regulations. |
To remove the Trusted Platform Module (TPM), the following requirements must be met for the server:
-
It must be disconnected from facility power.
-
It must be removed from the equipment rack.
Procedure
Step1 | Locate the TPM module. |
Step2 | Using a 6 mm slotted screwdriver or pliers, loosen the TPM screw and remove the TPM from the motherboard. |
Step3 | Dispose of the TPM properly. |
What to do next
Remove and dispose of the PCB Assembly. See Recycling the PCB Assembly (PCBA).
Recycling the PCB Assembly (PCBA)
Each blade server has a PCBA that is connected to the blade server's faceplate and sheet metal tray. You must disconnect the PCBA from the blade server's faceplate and tray to recycle the PCBA. Each blade server is attached to the faceplate and tray by the following:
-
Faceplate: Two M3 3mm screws.
-
Tray:
-
Five M3 screws
-
Seven hex nut standoffs
You will need to recycle the PCBA for each blade server.
Before you begin
To remove the printed circuit board assembly (PCBA), the following requirements must be met:
-
The server must be disconnected from facility power.
-
The server must be removed from the server chassis.
You will find it helpful to gather the following tools before beginning this procedure:
-
Screwdrivers: One each of T10 and T30 Torx; Phillips #1 and #2; 8 mm and 3 mm slotted.
-
Hex nut drivers: One each of 8 mm and 4.5 mm.
Procedure
Step1 | (Optional) If the CPUs and heatsinks are still installed, remove them. |
Step2 | (Optional) If the front mezzanine module for 12G SAS is still installed, use a #2 Phillips screwdriver and remove it. |
Step3 | (Optional) If the front mezzanine module for M.2 mini storage is still installed, use a #2 Phillips screwdriver and remove it. |
Step4 | (Optional) If the rear mezzanine module is still installed, use a #2 Phillips screwdriver and remove it. |
Step5 | Using a #2 Phillips screwdriver, unscrew the captive screw and remove the mLOM VIC. |
Step6 | Using a 3 mm slotted screwdriver, rotate each of the M3 faceplate screws counterclockwise until it disengages. |
Step7 | For each DIMM air baffle, grasp it and disconnect it from the blade. |
Step8 | Remove the TPM. See Removing the Trusted Platform Module (TPM). |
Step9 | Disconnect the motherboard from the blade sheet metal. |
Step10 | Recycle the motherboard in compliance with your local recycling and e-waste regulations. |