We consider the problem of automatically and efficiently computing
models of constraints, in the presence of complex background
theories such as floating-point arithmetic. Constructing models, or
proving that a constraint is unsatisfiable, has various
applications, for instance for automatic generation of test
inputs. It is well known that a naive encoding of constraints
into simpler theories (for instance, bit-vectors or propositional
logic) often leads to a drastic increase in size, and thus to
unsatisfactory space and runtime demands. We define a framework
for the systematic application of
approximations in order to improve solver performance. Our method is
more general than previous techniques in that it admits
approximations that are neither under- nor over-approximations, and it
shows promising performance on practically relevant benchmark
problems.