Curry-Howard for an imperative programming language?

One (extreme) perspective on Curry-Howard (as a general principle) is that it states that proof theorists and programming language theorists/type theorists are simply using different words for the exact same things. From this (extreme) perspective, the answer to your first question is trivially "yes". You simply call the types of your imperative language "formulas", you call the programs "proofs", you call the reduction rules of an operational semantics "proof transformations", and you call (the type mapping part of a) denotational semantics an "interpretation" (in the model-theoretic sense), et voilà, we have a proof system.
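To make the renaming concrete, here is a tiny sketch (in TypeScript, with names of my own choosing): reading `→` as implication and tuples as conjunction, an ordinary program becomes a "proof" of the formula its type spells out.

```typescript
// Under Curry-Howard, the type (A → B) → (B → C) → (A → C) is read as the
// formula (A ⇒ B) ⇒ (B ⇒ C) ⇒ (A ⇒ C), and function composition is its proof.
const compose =
  <A, B, C>(f: (a: A) => B) =>
  (g: (b: B) => C) =>
  (a: A): C =>
    g(f(a));

// A value of the product type [A, B] proves the conjunction A ∧ B.
const conjIntro = <A, B>(a: A, b: B): [A, B] => [a, b];
```

Running a program such as `compose` then corresponds to normalizing the proof; nothing about the code changes, only the vocabulary used to describe it.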

Of course, for most programming languages, this proof system will have horrible meta-theoretic properties, be inconsistent¹, and probably won't have an obvious connection to (proof systems for) usual logics. Interesting Curry-Howard-style results usually connect a pre-existing proof system to a pre-existing programming language/type theory. For example, the archetypal Curry-Howard correspondence connects the natural deduction presentation of intuitionistic propositional logic to (a specific presentation of) the simply typed lambda calculus. A sequent calculus presentation of intuitionistic propositional logic would (most directly) connect to a different presentation of the simply typed lambda calculus.

For example, in the presentation corresponding to natural deduction (and assuming we have products), you'd have terms $\pi_i:A_1\times A_2\to A_i$ and $\langle M,N\rangle : A_1\times A_2$ where $M:A_1$ and $N:A_2$. In a sequent calculus presentation, you wouldn't have the projections but instead a pattern-matching $\mathsf{let}$, e.g. $\mathsf{let}\ \langle x,y\rangle = T\ \mathsf{in}\ U$ where $T:A_1\times A_2$, $U:B$, and $x$ and $y$ can occur free in $U$ with types $A_1$ and $A_2$ respectively. As a third example, a Hilbert-style presentation of minimal logic (the implication-only fragment of intuitionistic logic) gives you the SKI combinatory logic. (My understanding is that this third example is closer to what Curry did.)
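The three presentations above can all be sketched as programs (again in TypeScript; the encoding is mine, and tuple types stand in for products):

```typescript
// Natural-deduction style: introduction ⟨M, N⟩ plus elimination by projections.
const pair = <A, B>(a: A, b: B): [A, B] => [a, b];
const proj1 = <A, B>(p: [A, B]): A => p[0];
const proj2 = <A, B>(p: [A, B]): B => p[1];

// Sequent-calculus style: no projections; the product is consumed by a
// pattern-matching "let": letPair(t, u) plays the role of let ⟨x, y⟩ = t in u.
const letPair = <A, B, C>(t: [A, B], u: (x: A, y: B) => C): C => u(t[0], t[1]);

// The two presentations are interdefinable, e.g. π₁ via the pattern-matching let:
const proj1ViaLet = <A, B>(p: [A, B]): A => letPair(p, (x, _y) => x);

// Hilbert-style minimal logic: the axiom schemes K and S as combinators.
const K = <A, B>(a: A) => (_b: B): A => a;
const S =
  <A, B, C>(f: (a: A) => (b: B) => C) =>
  (g: (a: A) => B) =>
  (a: A): C =>
    f(a)(g(a));

// I = S K K is the Hilbert-style "proof" of A → A.
const I = <A>(a: A): A => S<A, (x: A) => A, A>(K)(K)(a);
```

The derivation of `I` from `S` and `K` mirrors the usual Hilbert-style derivation of $A \to A$ from the two axiom schemes.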

Another way of using Curry-Howard is to leverage tools and intuitions from proof theory for computational purposes. Atsushi Ohori's Register Allocation by Proof Transformation is an example of a proof system built around a simple imperative language (meant to be low-level). This system is not intended to correspond to any pre-existing logic. Going the other way, computational interpretations can shed light on existing proof systems or suggest entirely new logics and proof systems. Things like the Concurrent Logical Framework clearly leverage the interplay of proof theory and programming.

Let's say we start with a typed lambda calculus, for familiarity. What happens when we add various effects? We actually know in many cases (though usually the effects need to be carefully controlled, more so than they typically are in mainstream programming languages). The most well-known example is callcc, which gives us classical logic (though, again, a lot of care is needed to produce a well-behaved proof system). Slide 5 (page 11) of this slide set mentions several others and a scheme for encoding them into Coq. (You may find the entire slide set, and other work by Pierre-Marie Pédrot, interesting.) Adding effects tends to produce powerful new proof rules and non-trivial extensions of what's provable (often to the point of inconsistency when done cavalierly). For example, adding exceptions (in the context of intuitionistic dependent type theory) allows us to validate Markov's rule. One way of understanding exceptions is via a monad-style translation, and the logical view of this is Friedman's translation. If you take a constructive reading of Markov's rule, it's actually pretty obvious how you could implement it given exceptions. (What's less obvious is whether adding exceptions causes any other problems. Naively, it clearly does, since we can "prove" anything by simply throwing an exception, but if we require all exceptions to be caught...)
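To illustrate the constructive reading, here is a sketch (in TypeScript; the predicate `p` is a made-up stand-in for an arbitrary decidable predicate): Markov's rule says that if it is impossible that no $n$ satisfies $p$, then unbounded search for a witness terminates, and an exception serves as the "escape" carrying the witness out of the search.

```typescript
// An exception class carrying the witness found by the search.
class Found extends Error {
  constructor(public witness: number) {
    super("witness found");
  }
}

// Unbounded search realizing Markov's rule for a decidable predicate p.
// The search "cannot" diverge precisely when ¬¬∃n. p(n) holds; the throw is
// the only exit, and the handler turns the escape into the returned witness.
const markovSearch = (p: (n: number) => boolean): number => {
  try {
    for (let n = 0; ; n++) {
      if (p(n)) throw new Found(n);
    }
  } catch (e) {
    if (e instanceof Found) return e.witness;
    throw e;
  }
};

// A hypothetical decidable predicate, purely for illustration.
const p = (n: number): boolean => n * n > 50;

markovSearch(p); // → 8, the least n with n² > 50
```

Note the discipline alluded to above: the exception is thrown and caught entirely within `markovSearch`, so no "proof by uncaught exception" escapes to the rest of the program.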

Daniel Schepler mentioned Hoare logic (a more modern version would be Hoare Type Theory), and it is worth mentioning how this connects. A Hoare logic is itself a deductive system (one usually defined in terms of a traditional logic). The "formulas" are the Hoare triples, i.e. programs annotated with pre- and post-conditions, so this is a different way of connecting logic and computation than Curry-Howard².

A less extreme perspective on Curry-Howard would add some conditions on what constitutes an acceptable proof system and, going the other way, what constitutes an acceptable programming language. (It's quite possible to specify proof systems that don't give rise to an obvious notion of computation.) Also, the traditional way of using Curry-Howard to produce a computational system identifies computation with proof normalization, but there are other ways of thinking about computation. Regardless of perspective, the well-behaved proof systems/rules are often the most interesting. That's certainly true from the logical side, but even from the programming side, good meta-theoretic properties of the proof system usually correspond to properties that are good for program comprehension.

¹ Traditional discussions of logic tend to dismiss inconsistent logics as completely uninteresting. This is an artifact of proof-irrelevance: inconsistent logics can still have interesting proof theories! You simply can't accept just any proof. Of course, it is still reasonable to focus on consistent theories.

² I don't mean to suggest that Daniel Schepler meant otherwise.