Abstract: Place cells, widely considered a neural basis for navigation in rodents and possibly other vertebrates, are neurons in the hippocampus that fire only when the animal is at a particular location in its environment, independent of its orientation. We present a computational network model that quickly learns place cell-like units in a fixed environment purely from visual input. First, a hierarchical network is trained to produce slowly varying output signals that are independent of the orientation of the simulated animal. A simple linear transformation then maximizes sparseness and yields units with localized place fields, as observed in the hippocampus. The results depend on the movement statistics of the simulated animal and can also reproduce head direction cells and view cells, i.e., cells that respond selectively when the animal has a certain orientation or looks at a particular point in the environment. The sparseness step can be realized by a neurogenesis process that recruits new cells whenever a qualitatively new stimulus appears. This supports the view that the function of neurogenesis is to adapt the dentate gyrus to new input stimuli without compromising the representation of previously seen stimuli.
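The first stage of the pipeline described above (learning slowly varying outputs) can be sketched as a minimal linear slow feature analysis on toy data; the function name, the plain-NumPy implementation, and the toy signals are illustrative assumptions, not the paper's actual hierarchical network:

```python
import numpy as np

def slow_features(x):
    """Linear slow feature analysis sketch: whiten the input, then
    rotate so components are ordered from slowest to fastest."""
    x = x - x.mean(axis=0)                      # center
    d, E = np.linalg.eigh(x.T @ x / len(x))     # covariance eigendecomposition
    z = x @ (E / np.sqrt(d))                    # whiten (unit variance, decorrelated)
    dz = np.diff(z, axis=0)                     # temporal derivatives
    _, R = np.linalg.eigh(dz.T @ dz / len(dz))  # eigh ascends: slowest direction first
    return z @ R

# Toy data: a slow and a fast oscillation, linearly mixed, standing in for
# the sensory input stream of the simulated animal.
t = np.linspace(0, 4 * np.pi, 2000)
sources = np.column_stack([np.sin(t), np.sin(13 * t)])
mixed = sources @ np.array([[1.0, 0.6], [0.4, 1.0]])
y = slow_features(mixed)
# y[:, 0] recovers the slow source up to sign and scale.
```

A sparseness-maximizing rotation (e.g., ICA) applied to such slow outputs would then play the role of the paper's second, place-field-forming step.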