In a personal credential-management app, I need to create a floating button that can be dragged within a constrained area and, once released, snaps to the screen edge. At the same time, it still behaves like a button: it recognizes a tap gesture and performs its action.
If you have used the "floating window" in the WeChat app, it is the same idea. Another example is the iPhone's "soft" Home button: some people rely on this AssistiveTouch feature when their physical Home button is broken. It can also be dragged around to reposition it, but it snaps to the closest edge once you let it go, and it expands into a bigger view when you tap on it.
For demonstration purposes, I will build a simple demo app that contains only a floating button.
The core of this effect is to attach multiple gesture recognizers to the button so that it can react to all of them. The snap part is relatively straightforward: when the drag gesture ends, check the button's position, find out which edge is closest, and adjust the button's X/Y position accordingly.
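As a sketch of that idea (a standalone helper with names of my own choosing, not code from the demo app), the nearest-edge calculation can look like this:

```swift
import Foundation

// Given the button's position and the area it lives in, find the
// closest of the four edges and move the point onto that edge.
func snapToNearestEdge(_ point: CGPoint, in area: CGRect) -> CGPoint {
    // Pair each edge's distance with the snapped position on that edge.
    let candidates: [(CGFloat, CGPoint)] = [
        (abs(point.x - area.minX), CGPoint(x: area.minX, y: point.y)), // leading
        (abs(area.maxX - point.x), CGPoint(x: area.maxX, y: point.y)), // trailing
        (abs(point.y - area.minY), CGPoint(x: point.x, y: area.minY)), // top
        (abs(area.maxY - point.y), CGPoint(x: point.x, y: area.maxY)), // bottom
    ]
    // Pick the candidate with the smallest distance.
    return candidates.min { $0.0 < $1.0 }!.1
}
```

Computing the distance to all four edges keeps the logic symmetric; the demo in this post simplifies it to "always snap to the trailing edge" when the drag ends.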
In my scenario, I want to make sure the user REALLY wants to rearrange the button and is not moving it by accident, so I use a long-press gesture to put the button into drag mode. After that, it can be dragged, but only within a constrained area. Therefore three gesture recognizers are attached to the button: long press, drag, and tap.
To create the long-press gesture recognizer, I need a @State variable called buttonInDrag to track whether the button has entered drag mode. It is also used in the button's modifiers to determine the scale factor. The long-press gesture has a minimum duration of 0.3 s, meaning a press only counts as a successful gesture once it has lasted longer than 0.3 s.
@State private var buttonInDrag: Bool = false
// Also referenced later in this post (initial values are placeholders):
@State private var buttonState: Bool = false                  // toggled by the tap gesture
@State private var currPos: CGPoint = CGPoint(x: 300, y: 100) // the button's current position
@GestureState private var startPos: CGPoint? = nil            // the position when a drag begins
...
let hapticImpact = UIImpactFeedbackGenerator(style: .medium)
let longPressGesture = LongPressGesture(minimumDuration: 0.3)
    .onEnded { finished in
        buttonInDrag = true
        hapticImpact.impactOccurred()
    }
To track and update the button's position, I need to use GeometryReader so I can define the constrained area. The area sits on the right side of the screen and has a width of 100 and a height of (the view height - 100).
var body: some View {
    GeometryReader { geometry in
        let SnapTrailing: CGRect = CGRect(x: geometry.size.width - 99, y: 20, width: 100, height: geometry.size.height - 100)
        let hapticImpact = UIImpactFeedbackGenerator(style: .medium)
        let longPressGesture = LongPressGesture(minimumDuration: 0.3)
            .onEnded { finished in
                buttonInDrag = true
                hapticImpact.impactOccurred()
            }
        ...
    } //: End of GeometryReader
} //: End of body
In my drag gesture, I need to update the button's position while making sure the button cannot be dragged out of the constrained area. To achieve this, I update the button's position with the gesture's translation data and check whether the new position falls outside the snap area. If it does, I adjust the X or Y of the position to the X or Y of the nearest edge.
When the drag gesture ends, I set the X value of the position to the right-most edge of the area but keep the Y value unchanged, so the button snaps to the right edge.
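The clamping described above can be expressed as a small pure function (a sketch with a name of my own choosing; the demo keeps this logic inline in .onChanged):

```swift
import Foundation

// Clamp a point into a rectangle: if either coordinate falls outside,
// pull it back to the nearest edge on that axis.
func clamp(_ point: CGPoint, to rect: CGRect) -> CGPoint {
    CGPoint(x: min(max(point.x, rect.minX), rect.maxX),
            y: min(max(point.y, rect.minY), rect.maxY))
}
```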
CoordinateSpace defines the coordinate system the gesture's values are reported in: the gesture's movement is mapped into the space you specify. In this case the parent view is the coordinate space, so I gave it the name "TrailingSnapArea" with the .coordinateSpace(name:) modifier and referenced that name in the DragGesture.
let dragGesture = DragGesture(minimumDistance: 0, coordinateSpace: CoordinateSpace.named("TrailingSnapArea"))
    .updating($startPos) { value, gestureStart, transaction in
        gestureStart = gestureStart ?? currPos
    }
    .onChanged { gesture in
        var newLocation = startPos ?? currPos
        newLocation.x += gesture.translation.width
        newLocation.y += gesture.translation.height
        self.currPos = newLocation
        if !SnapTrailing.contains(newLocation) {
            if newLocation.x <= SnapTrailing.minX || newLocation.x >= SnapTrailing.maxX {
                self.currPos.x = newLocation.x <= SnapTrailing.minX ? SnapTrailing.minX : SnapTrailing.maxX
            }
            if newLocation.y <= SnapTrailing.minY || newLocation.y >= SnapTrailing.maxY {
                self.currPos.y = newLocation.y <= SnapTrailing.minY ? SnapTrailing.minY : SnapTrailing.maxY
            }
        }
    }
    .onEnded { value in
        self.currPos.x = SnapTrailing.maxX
        buttonInDrag = false
    }
Lastly, I need to combine the long-press gesture and the drag gesture into a new gesture. Because the drag gesture should only be recognized after the long-press gesture succeeds, I use .sequenced(before:) to combine them:
let longDragGesture = longPressGesture.sequenced(before: dragGesture)
The rest is straightforward. Use the .simultaneousGesture() view modifier to attach the gesture to the view.
Button(action: {
}) {
    Text("Drag me!")
        .font(.title)
}
.padding(30)
.background(buttonState ? Color.green : Color.red)
.cornerRadius(12)
.scaleEffect(buttonInDrag ? 1.4 : 1.0)
.animation(.spring(response: 0.25, dampingFraction: 0.59, blendDuration: 0.0), value: buttonInDrag)
.position(currPos)
.simultaneousGesture(longDragGesture)
.simultaneousGesture(
    TapGesture()
        .onEnded {
            self.buttonState.toggle()
        }
)
The complete code can be found on GitHub. It also contains a view with four snap areas: Leading, Top, Trailing, and Bottom. The button will snap to the corresponding area when released. Feel free to play around with it.