In linear regression models and in linear logistic regression, the “linearity” means that the model is linear in the parameters, often called the “betas”.

Let LP be a linear predictor:

LP = beta0 + beta1*x1 + beta2*x2

It doesn't matter if the x's are nonlinear transformations of the original variables, like x1 = log(x01) and x2 = (x02)^2.
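
A minimal sketch of this point (the variable names x01 and x02 are placeholders, and the data are simulated): the inputs are log- and square-transformed, yet ordinary least squares applies unchanged, because the model is linear in the betas.

```python
import numpy as np

rng = np.random.default_rng(0)
x01 = rng.uniform(1, 10, 200)  # original measured variables (hypothetical)
x02 = rng.uniform(1, 10, 200)

# Nonlinear transformations of the inputs; the model stays linear in the betas
x1 = np.log(x01)
x2 = x02**2

# Design matrix: a column of ones for beta0, then x1 and x2
X = np.column_stack([np.ones_like(x1), x1, x2])
y = 1.0 + 2.0*x1 - 0.5*x2 + rng.normal(0, 0.3, 200)  # simulated response

# Ordinary least squares still works, because LP is linear in the betas
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(betas)  # approximately [1.0, 2.0, -0.5]
```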

Substitute z's for the betas and k's for the x's if that makes it clearer:
LP = z0 + z1*k1 + z2*k2

(When searching for the least-squares or maximum-likelihood solution, the x's are treated as observed constants (the “k's”) and the betas (the “z's”) are varied to find the minimum or maximum.)
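
A sketch of that idea with made-up numbers: the k's are held fixed inside the criterion function, and a generic optimizer varies the z's to minimize the sum of squared errors.

```python
import numpy as np
from scipy.optimize import minimize

# The k's (observed x-values) are fixed constants; only the z's (betas) vary.
k1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
k2 = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
y = np.array([4.1, 5.0, 9.2, 8.9, 12.1])

def sum_of_squares(z):
    lp = z[0] + z[1]*k1 + z[2]*k2  # LP = z0 + z1*k1 + z2*k2
    return np.sum((y - lp)**2)

# Vary the z's to find the minimum of the criterion
result = minimize(sum_of_squares, x0=np.zeros(3))
print(result.x)  # the least-squares estimates of z0, z1, z2
```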

In linear regression: E(Y) = mu = LP

In linear logistic regression (Y = 1 or 0):

E(Y) = p and

log(p/(1-p)) = LP

which can be solved for p, giving the non-linear inverse of the link:

p = exp(LP)/(1 + exp(LP))

which is an S-shaped (sigmoid) function.
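
To see the S-shape numerically, here is a small sketch that evaluates the inverse logit over a grid of LP values:

```python
import numpy as np

lp = np.linspace(-6, 6, 13)
p = np.exp(lp) / (1 + np.exp(lp))  # inverse logit: maps any LP into (0, 1)

for a, b in zip(lp, p):
    print(f"LP = {a:5.1f}  ->  p = {b:.3f}")
# p rises from near 0, through 0.5 at LP = 0, toward 1: the S-shape
```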

- - -

Of course the model must fit! Maybe the original LP does not fit well, and a squared term is needed:

LP2 = beta0 + beta1*x1 + beta2*x2 + beta3*(x2)^2

But it is still a linear model, since it is linear in its parameters, the betas.
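
A sketch with simulated data: adding the squared term just adds one more column to the design matrix, so ordinary least squares is unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
x2 = rng.normal(size=100)
y = 1 + 2*x1 + 3*x2 - 0.5*x2**2 + rng.normal(0, 0.2, 100)

# LP2 = beta0 + beta1*x1 + beta2*x2 + beta3*(x2)^2 is still linear in the
# betas: the squared term is just one more column in the design matrix.
X2 = np.column_stack([np.ones_like(x1), x1, x2, x2**2])
betas, *_ = np.linalg.lstsq(X2, y, rcond=None)
print(betas)  # approximately [1, 2, 3, -0.5]
```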
- - -

When you run this in software, you just declare that you want a logit (logistic regression) model and specify which variable is the 0/1 response variable. You absolutely do not transform the dependent variable yourself. You also declare which variables are your independent variables (the x-variables).
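
For example, with Python's statsmodels (simulated data, hypothetical variable names), you put the 0/1 response on the left of the formula and the x-variables on the right; the logit link is handled internally and y is never transformed:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({"x1": rng.normal(size=500), "x2": rng.normal(size=500)})
lp = -0.5 + 1.2*df["x1"] - 0.8*df["x2"]
df["y"] = rng.binomial(1, np.exp(lp) / (1 + np.exp(lp)))  # 0/1 response

# Declare the 0/1 response and the x-variables; the software applies
# the logit link internally, so y itself is not transformed.
model = smf.logit("y ~ x1 + x2", data=df).fit()
print(model.params)
```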

Transforming a continuous variable by classifying it as “high” or “low” throws away information.
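
One illustrative simulation (not a proof): fitting the same response against the continuous x and against a high/low median split shows the dichotomized model explaining noticeably less variance.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({"x": rng.normal(size=500)})
df["y"] = 2.0*df["x"] + rng.normal(0, 1, 500)
df["x_hilo"] = (df["x"] > df["x"].median()).astype(int)  # "high"/"low" split

full = smf.ols("y ~ x", data=df).fit()
binned = smf.ols("y ~ x_hilo", data=df).fit()
print(full.rsquared, binned.rsquared)  # the dichotomized fit explains less
```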