I usually do not take performance into account when deciding the number of tablespaces or datafiles. You should use your tablespaces to logically segregate your segments. For instance, the HR application's data is in one tablespace and the Accounts Receivable data is in another tablespace. Also keep in mind that the tablespace is the smallest unit of backup you should be performing. If your database is very large, you might back up one tablespace tonight and another tablespace tomorrow night. From a manageability perspective, I prefer one datafile per tablespace, but that is not possible in many situations. Provided your OS supports files larger than 2GB, the maximum datafile size is 32GB for a tablespace defined with an 8K block size and 64GB for a tablespace defined with a 16K block size. So if your tablespace is larger than these maximum file sizes, you will need multiple datafiles.
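As a rough sketch of the segregation-by-application idea, the DDL might look like the following. The tablespace names, file paths, and sizes are all hypothetical, and the sizes assume an 8K block size, where a single datafile tops out around 32GB:

```sql
-- Hypothetical names and paths: one tablespace per application.
CREATE TABLESPACE hr_data
  DATAFILE '/u01/oradata/prod/hr_data01.dbf' SIZE 10G;

CREATE TABLESPACE ar_data
  DATAFILE '/u01/oradata/prod/ar_data01.dbf' SIZE 10G;

-- A tablespace that must grow past one file's maximum size
-- simply gets additional datafiles:
ALTER TABLESPACE ar_data
  ADD DATAFILE '/u01/oradata/prod/ar_data02.dbf' SIZE 10G;
```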
When I do think about performance, I keep two things in mind. You can spread out an I/O load by placing your tablespace's datafiles on different disk units. You can also use one of the RAID levels that implements striping to perform this action for you automatically.
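To illustrate the first approach, a tablespace can be created with its datafiles spread across separate disk units at the outset. The paths below are assumptions; the point is only that /u01 and /u02 represent different physical disks:

```sql
-- /u01 and /u02 are assumed to be mount points on separate disk units,
-- so extent allocation is spread across both files.
CREATE TABLESPACE sales_data
  DATAFILE '/u01/oradata/prod/sales01.dbf' SIZE 8G,
           '/u02/oradata/prod/sales02.dbf' SIZE 8G;
```

With RAID striping underneath, the same spreading happens at the storage layer and a single datafile path suffices.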
Another performance problem, which typically presents itself only with very large databases, concerns the checkpoint process, CKPT. When a checkpoint occurs, the database writes changed blocks in the buffer cache to their datafiles. The CKPT process also updates the header of every datafile with the SCN of the latest checkpoint, even if that file had no changed blocks. The more datafiles you have, the longer it will take CKPT to complete this process. So if your database is very large, you may want to start using the maximum datafile size your tablespace can support. Note: CKPT will not update the datafiles that are part of a READ ONLY tablespace.
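You can see this header bookkeeping for yourself by querying the per-file checkpoint SCNs. A minimal sketch, using the V$DATAFILE_HEADER view:

```sql
-- Per-file checkpoint SCNs. Files belonging to a READ ONLY
-- tablespace keep their SCN frozen at the moment the tablespace
-- was made read-only, while read-write files advance together.
SELECT file#, tablespace_name, status, checkpoint_change#
  FROM v$datafile_header
 ORDER BY file#;
```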