Currently the DFSClient in Hadoop already provides this functionality. A DFSClient instance loads the host/port information of both NameNodes (NNs) from the configuration. When it issues an RPC request, it first tries one of the NNs (specifically, the first one listed in the configuration). If that NN is actually in standby state, it returns a StandbyException to the client, and the client automatically fails over to the other NN and resends the request there. Client failover is also triggered when the first NN fails outright and the client receives an exception such as ConnectException.
This client failover mechanism has also been implemented in WebHDFS.
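The configuration-driven behavior described above can be sketched with a typical HDFS HA client configuration. This is a minimal sketch, not taken from the source: the nameservice ID "mycluster", the NameNode IDs "nn1"/"nn2", and the hostnames are hypothetical placeholders; the property names follow the standard Hadoop HA configuration keys.

```xml
<configuration>
  <!-- Logical nameservice that clients address instead of a single NN -->
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <!-- The two NNs the client will try in order -->
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>nn1-host.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>nn2-host.example.com:8020</value>
  </property>
  <!-- Proxy provider class implementing the try-one-then-failover logic -->
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
</configuration>
```

With such a configuration, clients address the logical URI (e.g. hdfs://mycluster/path) rather than a specific NN host, and the failover proxy provider transparently retries against the other NN on StandbyException or connection failure.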