The only surface option I saw related to opacity was Cutout Opacity, which is already set to 1
There are two sources of opacity. The first is an "opacity map." Hairs almost always use one of these, because the strands need to become more transparent toward the tips. The second is "Cutout Opacity," which is essentially multiplied by the opacity value in the map. So if a particular pixel in the map says "opacity 0.6" and Cutout Opacity is set to 0.5, you net out at 0.3.
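In other words, the two values just multiply per pixel. A minimal sketch of that arithmetic (the function and parameter names are my own for illustration, not actual Iray shader parameters):

```python
# Illustrative only: how a per-pixel opacity map value and the surface-wide
# Cutout Opacity combine. Names are made up for the example, not Iray's.
def effective_opacity(map_opacity: float, cutout_opacity: float) -> float:
    """Net opacity is the per-pixel map value scaled by the cutout setting."""
    return map_opacity * cutout_opacity

print(effective_opacity(0.6, 0.5))  # 0.3, as in the example above
```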
As no__name said, more iterations will probably help some, as they give DS more of a chance to diddle with those areas. Areas with transparency tend to be the last in the image to converge, because iRay has to do a lot more calculation there: it has to shade the first (partially transparent) surface the ray hits and then keep working out what's behind it. dForce hair tends to be worse.
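For intuition on why more iterations help (a generic Monte Carlo sketch, not anything from Iray's actual code): each pixel is a running average of noisy samples, and the noise shrinks roughly as one over the square root of the sample count, so the hard pixels simply need more passes.

```python
import random

# Generic Monte Carlo illustration (not Iray internals): a pixel's value is the
# average of many noisy samples, and the average's error falls roughly as
# 1/sqrt(samples). Pixels behind transparent layers start out noisier, so they
# need more iterations to reach the same quality as the rest of the image.
def avg_error(samples: int, trials: int = 500, sigma: float = 1.0) -> float:
    total = 0.0
    for _ in range(trials):
        mean = sum(random.gauss(0.0, sigma) for _ in range(samples)) / samples
        total += abs(mean)  # distance from the true value (0.0 here)
    return total / trials

for n in (16, 64, 256, 1024):
    print(n, round(avg_error(n), 4))  # error roughly halves each time samples go up 4x
```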
FYI, the way that hairs tend to be constructed (not including strand-based hairs) is as a set of "strips," each of which then has textures on it to make it look like hair. Think of painting a piece of cellophane with very fine stripes representing different strands of hair, and then arranging for some degree of transparency, particularly down near the ends. The strips are then arranged in an overlapping pattern to make it look like real hair.

With dForce hairs, the strips tend to be much smaller, so that each of them can move more freely with respect to the others. (With non-dForce hair, the creator can use larger strips and sort of "layer" them manually, because he/she doesn't have to worry about them moving except in response to the morphs he/she creates.) But for dForce to work, you have to have a lot of smaller strips. This tends to mean a lot more calculation for iRay (after the dForce-ing is done, I mean), because there tend to be many more layers to work through. As a result, dForce hairs tend to be much more render-intensive, and slower to converge.
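To make the "more layers, more work" point concrete, here is a small sketch of my own (made-up opacity numbers and a hypothetical cutoff, not Iray's actual transparency handling): each partially transparent strip a ray passes through is another surface to shade, and the light continuing behind it is scaled by (1 - opacity).

```python
# Illustrative sketch, not Iray internals: count how many strips a ray has to
# shade before almost no light continues past them. Many small, overlapping
# dForce strips mean more layers per ray than a few large hand-layered strips.
def layers_shaded(strip_opacities):
    throughput = 1.0  # fraction of light still continuing along the ray
    shaded = 0
    for opacity in strip_opacities:
        shaded += 1                    # the ray must evaluate this strip
        throughput *= (1.0 - opacity)  # light passing through to what's behind
        if throughput < 0.01:          # hypothetical point where further layers stop mattering
            break
    return shaded, throughput

# A few large strips vs. many small dForce-style strips covering the same area:
print(layers_shaded([0.6, 0.6, 0.6]))   # few layers, done quickly
print(layers_shaded([0.3] * 12))        # many thinner layers, much more shading work
```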