Why not use long for all integer values

Apart from the range (the minimum and maximum values that can be stored in a particular data type), there is another aspect: the size of the variable in memory.

You should also be aware of the following sizes:

byte = 1 byte
short = 2 bytes
int = 4 bytes
long = 8 bytes

So using a long variable means you are allocating 8 bytes of memory for it.

Something like,

long var = 1000L;

is not an efficient use of memory. Having GBs of RAM these days does not mean we should waste it.

The simple point I want to make is: the more efficiently memory is used, the faster the app will be.
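
As a rough illustration, here is a minimal sketch that estimates the heap taken by an int[] versus a long[] (the 10-million-element size is an arbitrary assumption; exact numbers vary by JVM, but the long[] should come out roughly twice as large):

public class ArrayFootprint {

    // Rough heap measurement; results vary by JVM and GC timing.
    static long usedMemory() {
        Runtime rt = Runtime.getRuntime();
        rt.gc(); // request a GC so the reading is less noisy
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        final int N = 10_000_000; // arbitrary size for illustration

        long before = usedMemory();
        int[] ints = new int[N];    // 4 bytes per element => ~40 MB
        long afterInts = usedMemory();

        long[] longs = new long[N]; // 8 bytes per element => ~80 MB
        long afterLongs = usedMemory();

        System.out.printf("int[]  : ~%d MB%n", (afterInts - before) / (1024 * 1024));
        System.out.printf("long[] : ~%d MB%n", (afterLongs - afterInts) / (1024 * 1024));

        // Touch the arrays so they stay reachable until after the measurement.
        System.out.println(ints.length + longs.length);
    }
}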


Does it make sense to use, for example, an int data type instead of a long data type?

ABSOLUTELY YES.


MEMORY / DISK USAGE

With only one or two variables you won't see a difference in performance, but as the app grows these choices add up and will increase your app's speed.

Check this question for further info.

Also, looking at Oracle's primitive data types documentation, you can see some advice and the memory usage of each type:

type    memory usage      recommended for
------- ----------------- ---------------------------------------------------
byte    8-bit signed      Useful for saving memory in large arrays, where the memory savings actually matters.
short   16-bit signed     Same as byte.
int     32-bit signed     Generally the default choice for integral values.
long    64-bit signed     Use this data type when you need a range of values wider than those provided by int.
float   32-bit IEEE 754   Use a float (instead of double) if you need to save memory in large arrays of floating point numbers. Never use it for precise values, such as currency.

byte:

The byte data type is an 8-bit signed two's complement integer. It has a minimum value of -128 and a maximum value of 127 (inclusive). The byte data type can be useful for saving memory in large arrays, where the memory savings actually matters.
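
For instance, because byte is a two's complement type, arithmetic that exceeds its range silently wraps around; a quick check:

public class ByteRange {
    public static void main(String[] args) {
        System.out.println(Byte.MIN_VALUE); // -128
        System.out.println(Byte.MAX_VALUE); // 127

        byte b = Byte.MAX_VALUE;
        b++; // overflows: wraps around in two's complement
        System.out.println(b); // -128
    }
}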

short:

The short data type is a 16-bit signed two's complement integer. It has a minimum value of -32,768 and a maximum value of 32,767 (inclusive). As with byte, the same guidelines apply: you can use a short to save memory in large arrays, in situations where the memory savings actually matters.

int:

By default, the int data type is a 32-bit signed two's complement integer, which has a minimum value of -2³¹ and a maximum value of 2³¹-1. In Java SE 8 and later, you can use the int data type to represent an unsigned 32-bit integer, which has a minimum value of 0 and a maximum value of 2³²-1.
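
The unsigned support mentioned above is exposed through static methods on the Integer class (a small Java 8+ sketch; the sample value is arbitrary):

public class UnsignedIntDemo {
    public static void main(String[] args) {
        // 4,000,000,000 does not fit in a signed int, but it does fit
        // in 32 bits when they are interpreted as unsigned.
        int u = Integer.parseUnsignedInt("4000000000");

        System.out.println(u);                           // -294967296 (signed interpretation)
        System.out.println(Integer.toUnsignedString(u)); // 4000000000 (unsigned interpretation)
        System.out.println(Integer.toUnsignedLong(u));   // 4000000000 widened to a long
    }
}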

long:

The long data type is a 64-bit two's complement integer. The signed long has a minimum value of -2⁶³ and a maximum value of 2⁶³-1. In Java SE 8 and later, you can use the long data type to represent an unsigned 64-bit long, which has a minimum value of 0 and a maximum value of 2⁶⁴-1. Use this data type when you need a range of values wider than those provided by int.
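
A classic case where int is too narrow is millisecond arithmetic (System.currentTimeMillis() returns a long for this reason); a small sketch with my own example values:

public class LongRange {
    public static void main(String[] args) {
        // Milliseconds in 30 days: 2,592,000,000 exceeds Integer.MAX_VALUE (2,147,483,647).
        int wrong = 30 * 24 * 60 * 60 * 1000;   // int arithmetic overflows silently
        long right = 30L * 24 * 60 * 60 * 1000; // the L suffix forces long arithmetic

        System.out.println(wrong); // -1702967296
        System.out.println(right); // 2592000000
    }
}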

float:

The float data type is a single-precision 32-bit IEEE 754 floating point. Its range of values is beyond the scope of this discussion, but is specified in the Floating-Point Types, Formats, and Values section of the Java Language Specification. As with the recommendations for byte and short, use a float (instead of double) if you need to save memory in large arrays of floating point numbers. This data type should never be used for precise values, such as currency.
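
The currency warning is easy to demonstrate; a minimal sketch, with java.math.BigDecimal as the usual alternative for exact decimal values:

import java.math.BigDecimal;

public class CurrencyDemo {
    public static void main(String[] args) {
        float total = 0f;
        for (int i = 0; i < 10; i++) {
            total += 0.1f; // ten dimes should make exactly one dollar
        }
        System.out.println(total == 1.0f); // false
        System.out.println(total);         // slightly off, e.g. 1.0000001

        // BigDecimal with a String constructor keeps decimal values exact.
        BigDecimal exact = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            exact = exact.add(new BigDecimal("0.1"));
        }
        System.out.println(exact); // 1.0
    }
}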


CODE READABILITY

Also, it will clarify your mind and your code. Let's say you have a variable that represents the ID of an object; this ID will never use decimals. So, if you see in your code:

int id;

you will know for sure how this ID will look, whereas

double id;

won't tell you that.

Also, if you see:

int quantity;
double price;

you will know quantity won't allow decimals (only whole objects) but price will... That makes your job (and that of other programmers reading your code) easier.
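
Putting that together, a small illustrative class (the names and values are my own example):

public class OrderLine {
    int quantity;  // whole units only; int documents that fractions are impossible
    double price;  // may legitimately carry decimals

    double total() {
        return quantity * price;
    }

    public static void main(String[] args) {
        OrderLine line = new OrderLine();
        line.quantity = 3;
        line.price = 9.99;
        System.out.println(line.total()); // about 29.97 (double rounding may leave a tiny tail)
    }
}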