This paper introduces a generalization of the notion of Nash equilibrium (NE), namely the quantal response equilibrium (QRE). In a QRE, radio devices choose their transmit/receive configuration taking into account that the estimate of their own performance is corrupted by noise. Here, it is shown that the notion of QRE neatly models decentralized self-configuring networks (DSCNs) in which feedback messages are impaired by quantization noise or decoding errors. The contribution of the paper is twofold. First, we show that in the presence of noise in the estimation of the achieved performance, classical dynamics such as best response, fictitious play, and reinforcement learning do not converge to an equilibrium. Second, we introduce a learning technique that is robust to this noise and converges to a QRE in certain classes of games. We present numerical results in the context of a two-dimensional parallel interference channel with two transmitters, each aiming to maximize its individual spectral efficiency by tuning its power allocation policy.
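To make the notion concrete, the following is a minimal sketch of a logit QRE computed by fixed-point iteration in a two-player matrix game. All payoff values are illustrative assumptions (not the spectral-efficiency utilities of the paper), and the rationality parameter `lam` inversely reflects the estimation noise: `lam -> 0` yields uniformly random play, while `lam -> infinity` recovers a Nash equilibrium.

```python
import numpy as np

# Illustrative payoffs for two transmitters, each choosing one of two
# power allocation policies (e.g., spread power vs. concentrate it).
# U1[i, j] is player 1's payoff when it plays i and player 2 plays j;
# the numbers below are hypothetical, chosen so action 1 dominates.
U1 = np.array([[3.0, 0.0],
               [5.0, 1.0]])
U2 = U1.T  # symmetric game: player 2's payoff with roles swapped

def logit_response(expected_payoffs, lam):
    """Logit (quantal) choice rule; smaller lam models noisier estimates."""
    z = lam * expected_payoffs
    z -= z.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def logit_qre(U1, U2, lam, iters=500):
    """Fixed-point iteration for the logit QRE of a 2-player matrix game."""
    p1 = np.full(U1.shape[0], 1.0 / U1.shape[0])
    p2 = np.full(U1.shape[1], 1.0 / U1.shape[1])
    for _ in range(iters):
        p1 = logit_response(U1 @ p2, lam)        # player 1's quantal response
        p2 = logit_response(U2.T @ p1, lam)      # player 2's quantal response
    return p1, p2

# Heavy estimation noise: strategies stay close to uniform.
p1, p2 = logit_qre(U1, U2, lam=0.1)
# Little estimation noise: strategies approach the Nash equilibrium.
q1, q2 = logit_qre(U1, U2, lam=50.0)
```

The fixed point of this iteration is a QRE of the illustrative game; in the noiseless limit it coincides with the NE, which is the sense in which QRE generalizes NE.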