SpriteKit - Getting the actual size/frame of the visible area on scaled scenes .AspectFill

Here is how Aspect Fill works:

The dimensions of your scene will always remain the same, so if you say your scene is 480 points by 800 points, it will always remain that way regardless of device.

SpriteKit then scales the scene from the center of the SKView outward until the scene reaches the view's farthest edges.

This will in turn crop your scene.

So when designing your game across the different aspect ratios, you will need to design your game to handle cropping.
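As a minimal setup sketch of what is being described (standard SpriteKit API; the 480x800 size is just an example):

```swift
import SpriteKit

// A 480x800 scene presented with .aspectFill: the scene's coordinate system
// stays 480x800 on every device. SpriteKit scales the scene from the center
// until both view edges are covered, cropping whichever axis overflows.
let scene = SKScene(size: CGSize(width: 480, height: 800))
scene.scaleMode = .aspectFill
// Later, in a view controller: (view as! SKView).presentScene(scene)
```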

Now in your particular case, you will need to figure out the device's aspect ratio and work off its difference from the scene's aspect ratio.

E.g. your scene is 3x5 and the device is 9x16. This means your scene is going to grow to 9.6x16 to fill your 9x16 device. We get this because the height has more distance to cover than the width, so we solve 3/5 = a/16, 3*16 = a*5, a = (3*16)/5 = 9.6.

This means 0.6 of your scene's width is cropped, 0.3 on each edge.

Now you are going to ask: how does the 0.3 relate to the various screen sizes?

Well, we need to figure out what percentage 0.3 is of 9.6. We do that with division: 0.3/9.6 = 0.03125, or 3.125%. This means we need to push our border in by 3.125% of the scene's width, which is 15 points, since 0.03125 * 480 = 15. Your borders should then start at x = 15 and end at x = 465.
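The arithmetic above can be checked with a few standalone lines (a sketch using the 3x5 scene and 9x16 device from the example):

```swift
import Foundation

// Example from the text: a 3x5 scene scaled with aspect fill onto a 9x16 device.
let sceneW = 3.0, sceneH = 5.0
let deviceW = 9.0, deviceH = 16.0

// The height needs more scaling, so the width grows with it to keep the ratio:
let scaledW = (sceneW * deviceH) / sceneH            // (3 * 16) / 5 ≈ 9.6
// The horizontal overflow is split evenly between the two edges:
let cropPerEdge = (scaledW - deviceW) / 2            // ≈ 0.3
// Express one edge's crop as a fraction of the scaled width:
let fraction = cropPerEdge / scaledW                 // ≈ 0.03125, i.e. 3.125%
// Apply that fraction to a 480-point-wide scene:
let inset = fraction * 480                           // ≈ 15 points, so borders at 15 and 465
```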

Pseudo Code:

let newSceneWidth = (scene.width * view.height) / scene.height
let sceneDifference = (newSceneWidth - view.width) / 2
let diffPercentage = sceneDifference / newSceneWidth

let leftEdge = diffPercentage * scene.width
let rightEdge = scene.width - (diffPercentage * scene.width)

If instead there is more distance to scale in the height than in the width, swap the width and height variables.
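Running the pseudocode with concrete numbers (assuming the 480x800 scene from the question in a 360x640, i.e. 9:16, view) reproduces the 15 and 465 borders:

```swift
import Foundation

// Assumed example: a 480x800 scene presented with aspect fill in a 360x640 (9:16) view.
let sceneWidth = 480.0, sceneHeight = 800.0
let viewWidth = 360.0, viewHeight = 640.0

// The scene's height is scaled to the view's height, so compute the resulting width:
let newSceneWidth = (sceneWidth * viewHeight) / sceneHeight   // 384
// Half of the horizontal overflow is cropped on each side:
let sceneDifference = (newSceneWidth - viewWidth) / 2         // 12
let diffPercentage = sceneDifference / newSceneWidth          // 0.03125

// Convert the cropped fraction back into scene points:
let leftEdge = diffPercentage * sceneWidth                    // 15
let rightEdge = sceneWidth - (diffPercentage * sceneWidth)    // 465
```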


Thanks for the answers, and a special thanks to KnightOfDragon, who helped me reach a workaround for my problem. What I have done is create a function that produces a CGRect, taking into consideration the dimensions of a device as well as a scene's predetermined aspect ratio. This can be used as a boundary and makes it easier to position nodes relative to a scene so that they will be visible on all devices. Again, a special thanks to KnightOfDragon, whose answer provided the basis of the code for this solution; without them it would have been difficult, and I wouldn't be able to share this overall solution. Here is the final code in Swift:

//Returns a CGRect that has the dimensions and position for any device with respect to any specified scene. This will result in a boundary that can be utilised for positioning nodes on a scene so that they are always visible
func getVisibleScreen(sceneWidth: Float, sceneHeight: Float, viewWidth: Float, viewHeight: Float) -> CGRect {
    //var parameters were removed in Swift 3, so make mutable local copies instead
    var sceneWidth = sceneWidth
    var sceneHeight = sceneHeight
    var x: Float = 0
    var y: Float = 0

    let deviceAspectRatio = viewWidth / viewHeight
    let sceneAspectRatio = sceneWidth / sceneHeight

    //If the device's aspect ratio is smaller than the aspect ratio of the preset scene dimensions, the visible width needs to be
    //calculated: the scene's height has been scaled to match the height of the device's screen, so to keep the scene's aspect
    //ratio its width extends beyond what is visible.
    //The opposite happens if the device's aspect ratio is larger.
    if deviceAspectRatio < sceneAspectRatio {
        let newSceneWidth: Float = (sceneWidth * viewHeight) / sceneHeight
        let sceneWidthDifference: Float = (newSceneWidth - viewWidth) / 2
        let diffPercentageWidth: Float = sceneWidthDifference / newSceneWidth

        //Increase the x-offset by what isn't visible from the left of the scene
        x = diffPercentageWidth * sceneWidth
        //Multiplied by 2 because diffPercentageWidth only accounts for one side (e.g. left or right), not both
        sceneWidth = sceneWidth - (diffPercentageWidth * 2 * sceneWidth)
    } else {
        let newSceneHeight: Float = (sceneHeight * viewWidth) / sceneWidth
        let sceneHeightDifference: Float = (newSceneHeight - viewHeight) / 2
        let diffPercentageHeight: Float = abs(sceneHeightDifference / newSceneHeight)

        //Increase the y-offset by what isn't visible from the bottom of the scene
        y = diffPercentageHeight * sceneHeight
        //Multiplied by 2 because diffPercentageHeight only accounts for one side (e.g. top or bottom), not both
        sceneHeight = sceneHeight - (diffPercentageHeight * 2 * sceneHeight)
    }

    let visibleScreenOffset = CGRect(x: CGFloat(x), y: CGFloat(y), width: CGFloat(sceneWidth), height: CGFloat(sceneHeight))
    return visibleScreenOffset
}

There is actually a very simple way to do this. The function convertPoint(fromView:) on SKScene allows a point to be converted from its containing SKView's coordinate system into scene space coordinates. You can therefore take the view's origin and a point equal to its farthest extent, convert them both into the scene's coordinate space, and measure the difference like so:

extension SKScene {
    func viewSizeInLocalCoordinates() -> CGSize {
        let reference = CGPoint(x: view!.bounds.maxX, y: view!.bounds.maxY)
        let bottomLeft = convertPoint(fromView: .zero)
        let topRight = convertPoint(fromView: reference)
        let d = topRight - bottomLeft
        return CGSize(width: d.x, height: -d.y)
    }
}

(Note that I am also using an extension of CGPoint which allows me to perform addition and subtraction using points directly. See below. Also note that it is only safe to call this function after the scene has been presented; I'm being a bit lazy by force-unwrapping view, which will be nil if the scene is not being displayed yet.)

extension CGPoint {
    static func +(lhs: CGPoint, rhs: CGPoint) -> CGPoint {
        return CGPoint(x: lhs.x + rhs.x, y: lhs.y + rhs.y)
    }
    static func -(lhs: CGPoint, rhs: CGPoint) -> CGPoint {
        return CGPoint(x: lhs.x - rhs.x, y: lhs.y - rhs.y)
    }
}

Important: this assumes that the scene has no camera node assigned, or that the camera node's scale is 1:1. If you add your own camera node and allow it to be scaled in order to create a zoom effect, use the code below instead. Also keep in mind that nodes attached to the camera node itself do not scale with the rest of the scene. I've allowed for this in my code here by providing ignoreCameraScale. Pass true to this parameter if the measurements relate to HUD elements that move and scale with the camera.

extension SKScene {
    func viewSizeInLocalCoordinates(ignoreCameraScale: Bool = false) -> CGSize {
        let reference = CGPoint(x: view!.bounds.maxX, y: view!.bounds.maxY)
        var bottomLeft = convertPoint(fromView: .zero)
        var topRight = convertPoint(fromView: reference)

        if ignoreCameraScale, let camera = camera {
            bottomLeft = camera.convert(bottomLeft, from: self)
            topRight = camera.convert(topRight, from: self)
        }

        let d = topRight - bottomLeft
        return CGSize(width: d.x, height: -d.y)
    }
}